Diffstat (limited to 'ChangeLog')
-rw-r--r--  ChangeLog  194
1 file changed, 7 insertions, 187 deletions
@@ -1,3 +1,9 @@
+20/02/2020
+Wapiti 3.0.3
+Significant work was done to reduce false positives in XSS detection.
+That research involved scanning more than 1 million websites to uncover those issues.
+More details here: http://devloop.users.sourceforge.net/index.php?article217/one-crazy-month-of-web-vulnerability-scanning
+
 02/09/2019
 Wapiti 3.0.2
 New XXE module can send payloads in parameters, query string, file uploads and raw body.
@@ -26,7 +32,7 @@
 Fixed issue #54 in lamejs JS parser.
 Removed lxml and libxml2 as dependencies. That parser has difficulties parsing exotic encodings.

-03/01/2018
+03/01/2017
 Release of Wapiti 3.0.0

 02/01/2018
@@ -298,189 +304,3 @@

 25/04/2006:
 Version 1.0.0
-03/01/2018
-Release of Wapiti 3.0.0
-
-23/12/2017
-lswww is now renamed to Crawler.
-All HTML parsing is now done with BeautifulSoup. lxml should be the parsing engine but it's possible to opt out at
-setup with --html5lib.
-Analysis of JS in event handlers (onblur, onclick, etc.)
-Changed behavior of 'page' scope, added 'url' scope.
-Default MIME type used for upload fields is image/gif.
-Added yaswf as a dependency for SWF parsing.
-Custom HTTP error codes check.
-Fixed a bug with 'button' input types.
-Updated pynarcissus with a Python 3 version for JS parsing.
-Rewrote the "in scope" check.
-
-29/12/2009
-Version 2.3.1
-Fixed a bug in lswww if the root url is not given in full.
-Fixed a bug in lswww with a call to BeautifulSoup made on non-text files.
-Fixed a bug that occurred when verbosity = 2. Unicode error on stderr.
-
-27/12/2009
-Version 2.3.0
-Internationalization and translation to English and Spanish when called from
-Wapiti.
-Ability to save a scan session and restore it later (-i)
-Added option -b to set the scope of the scan based on the root url given as
-argument.
-Fixed bug ID 2779441 "Python Version 2.5 required?"
-Use a home-made cookie library instead of urllib2's.
-Keep additional information on the webpages (headers + encoding)
-Use BeautifulSoup to detect webpage encoding and handle parsing errors.
-Fixed a bug when "a href" or "form action" has an empty string as value.
-Better support for Unicode.
-
-26/03/2009
-Version 2.2.0
-Fixed bug ID 2433127 with HTTP 404 error codes.
-Don't let httplib2 manage HTTP redirections: return the status code
-and let lswww handle the new url.
-
-25/03/2009
-Version 2.1.9
-Added option -e (or --export)
-Saves urls and form data to an XML file.
-We hope other fuzzers will allow importing this file.
-
-24/03/2009
-More verifications on timeout errors.
-
-22/03/2009
-Version 2.1.8
-Fixed bug ID: 2415094
-The check on the protocol found in hyperlinks was case-sensitive.
-Made it case-insensitive.
-Integration of a second linkParser class called linkParser2 from
-lswwwv2.py. This parser uses only regexps to extract links and forms.
-
-25/11/2008
-httplib2 uses lowercase names for the HTTP headers, unlike
-urllib2 (first letter was uppercase).
-Changed the verifications on headers.
-
-15/11/2008
-Fixed a bug with links going to the parent directory.
-
-02/11/2008
-Better integration of the proxy support provided by httplib2.
-It's now possible to use SOCKS proxies.
-
-19/10/2008
-Version 2.1.7
-Now uses httplib2 (http://code.google.com/p/httplib2/), MIT licence,
-instead of urllib2.
-The ability to use persistent connections makes the scan faster.
-
-09/10/2008
-Version 2.1.6
-HTTP authentication now works.
-Added the option -n (or --nice) to prevent endless loops during scanning.
-
-28/01/2007
-Version 2.1.5
-First take a look at the Content-Type instead of the document extension.
-Added BeautifulSoup as an optional module to correct bad HTML documents
-(better to use tidy if you can).
-
-24/10/2006
-Version 2.1.4
-Wildcard exclusion with -x (--exclude) option
-
-22/10/2006
-Fixed an error with url parameter handling that appeared in the previous
-version.
-Fixed a typo in lswww.py (setAuthCreddentials: one 'd' is enough)
-
-07/10/2006
-Version 2.1.3
-Three verbose modes with the -v (--verbose) option:
-0: print only results
-1: print dots for each page accessed (default mode)
-2: print each found url during the scan
-Timeout in seconds can be set with the -t (--timeout) option
-Fixed bug "crash when no content-type is returned"
-Fixed an error with 404 webpages
-Fixed a bug when the only parameter of an url is a forbidden one
-
-09/08/2006
-Version 2.1.2
-Fixed a bug with regular expressions
-
-05/08/2006
-Version 2.1.1
-Remove redundant slashes from urls
-(e.g. http://server/dir//page.php converted to
-http://server/dir/page.php)
-
-20/07/2006
-Version 2.1.0 with urllib2
-
-11/07/2006
--r (--remove) option to remove parameters from URLs
-Generate URLs with GET forms instead of using POST by default
-Support for Basic HTTP Auth added, but it doesn't work with Python 2.4.
-Now uses cookie files (option "-c file" or "--cookie file")
-Extracts links from Location header fields
-
-
-06/07/2006
-Extract links from "Location:" headers (HTTP 301 and 302)
-Default type for "input" elements is set to "text"
-(as written in the HTML 4.0 specification)
-Added "search" to the input types (created for Safari browsers)
-
-04/07/2006
-Fixed a bug with empty parameter tuples
-(converts http://server/page?&a=2 to http://server/page?a=2)
-
-23/06/2006
-Version 2.0.1
-Take care of the "submit" type
-No extra data sent when a page contains several forms
-Corrected a bug with urls ending with '?'
-Support for cookies!
-
-25/04/2006
-Version 2.0
-Extract forms as a list of tuples, each containing a string
-(url of the target script) and a dict containing the field names
-and their default values (or 'true' if empty).
-Lists the scripts that handle uploads.
-Can now be used as a module.
-
-19/04/2006
-Version 1.1
-Case-insensitive reading of tags
-Handle Ctrl+C to interrupt the program cleanly
-Extract urls from form tags (action attribute)
-
-12/10/2005
-Version 1.0
-Handle links that are syntactically valid but point
-to nonexistent resources (404)
-
-11/09/2005
-Beta4
-Use the getopt module, which makes it easy to specify
-the urls to visit first, the urls to exclude (new!)
-or the proxy to use
-
-24/08/2005
-Beta3
-Added a timeout for page reads so as not to
-block on a buggy script
-
-23/08/2005
-Version beta2
-Support for indexes generated by Apache
-Filter on protocols
-Handle links going up the directory tree
-Handle empty links
-
-02/08/2005
-Release of beta1
