author     Kr1ss  2020-04-20 17:40:26 +0200
committer  Kr1ss  2020-04-20 17:40:26 +0200
commit     756855a0d511c6e537a534e4dc639672f3ec7248 (patch)
tree       8aba170bc1a143ab363bf22573451bec902e5369
parent     6954f7dba5469e4ce6c3cfd0f99a098a3b2e0c28 (diff)
update changelog
-rw-r--r--  .SRCINFO  |   2
-rw-r--r--  ChangeLog | 194
-rw-r--r--  PKGBUILD  |   2
3 files changed, 9 insertions, 189 deletions
diff --git a/.SRCINFO b/.SRCINFO
index 91f3e9f..b36817d 100644
--- a/.SRCINFO
+++ b/.SRCINFO
@@ -1,7 +1,7 @@
 pkgbase = wapiti
 	pkgdesc = A comprehensive web app vulnerability scanner written in Python
 	pkgver = 3.0.3
-	pkgrel = 1
+	pkgrel = 2
 	url = http://wapiti.sourceforge.net/
 	changelog = ChangeLog
 	arch = any
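
Note: on the AUR, .SRCINFO is generated metadata rather than edited by hand, so a hunk like this normally comes from regenerating the file after changing PKGBUILD. A minimal sketch of that standard workflow (ordinary makepkg/git usage, not commands recorded in this commit):

    # Regenerate .SRCINFO from the updated PKGBUILD
    makepkg --printsrcinfo > .SRCINFO
    # Commit both files together so the AUR metadata stays in sync
    git add PKGBUILD .SRCINFO
    git commit -m 'update changelog'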
diff --git a/ChangeLog b/ChangeLog
index 25d0b58..75cba8b 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+20/02/2020
+	Wapiti 3.0.3
+	Important work was done to reduce false positives in XSS detections.
+	That research involved scanning more than 1 million websites to discover those issues.
+	More details here: http://devloop.users.sourceforge.net/index.php?article217/one-crazy-month-of-web-vulnerability-scanning
+
 02/09/2019
 	Wapiti 3.0.2
 	New XXE module can send payloads in parameters, query string, file uploads and raw body.
@@ -26,7 +32,7 @@
 	Fixed issue #54 in lamejs JS parser.
 	Removed lxml and libxml2 as a dependency. That parser has difficulties parsing exotic encodings.
 
-03/01/2018
+03/01/2017
 	Release of Wapiti 3.0.0
 
 02/01/2018
@@ -298,189 +304,3 @@
 
 25/04/2006:
 	Version 1.0.0
-03/01/2018
-	Release of Wapiti 3.0.0
-
-23/12/2017
-	lswww is now renamed to Crawler.
-	All HTML parsing is now made with BeautifulSoup. lxml should be the parsing engine but it's possible to opt out at
-	setup with --html5lib.
-	Analysis of JS in event handlers (onblur, onclick, etc)
-	Changed behavior of 'page' scope, added 'url' scope.
-	Default mime type used for upload fields is image/gif.
-	Added yaswf as a dependency for SWF parsing.
-	Custom HTTP error codes check.
-	Fixed a bug with 'button' input types.
-	Updated pynarcissus with a python3 version for js parsing.
-	Rewrote "in scope" check.
-
-29/12/2009
-	Version 2.3.1
-	Fixed a bug in lswww if the root url is not given complete.
-	Fixed a bug in lswww with a call to BeautifulSoup made on non-text files.
-	Fixed a bug that occurred when verbosity = 2. Unicode error on stderr.
-
-27/12/2009
-	Version 2.3.0
-	Internationalization and translation to English and Spanish when called from
-	Wapiti.
-	Ability to save a scan session and restore it later (-i)
-	Added option -b to set the scope of the scan based on the root url given as
-	argument.
-	Fixed bug ID 2779441 "Python Version 2.5 required?"
-	Use a home-made cookie library instead of urllib2's one.
-	Keep additional information on the webpages (headers + encoding)
-	Use BeautifulSoup to detect webpage encoding and handle parsing errors.
-	Fixed a bug when "a href" or "form action" has an empty string as value.
-	Better support of Unicode.
-
-26/03/2009
-	Version 2.2.0
-	Fixed bug ID 2433127 with HTTP 404 error codes.
-	Don't let httplib2 manage HTTP redirections: return the status code
-	and let lswww handle the new url.
-
-25/03/2009
-	Version 2.1.9
-	Added option -e (or --export)
-	Saves urls and forms data to an XML file.
-	We hope other fuzzers will allow importing this file.
-
-24/03/2009
-	More verifications on timeout errors.
-
-22/03/2009
-	Version 2.1.8
-	Fixed bug ID: 2415094
-	The check on the protocol found in hyperlinks was case-sensitive.
-	Made it case-insensitive.
-	Integration of a second linkParser class called linkParser2 from
-	lswwwv2.py. This parser uses only regexps to extract links and forms.
-
-25/11/2008
-	httplib2 uses lowercase names for the HTTP headers, unlike
-	urllib2 (first letter was uppercase).
-	Changed the verifications on headers.
-
-15/11/2008
-	Fixed a bug with links going to the parent directory.
-
-02/11/2008
-	Better integration of proxy support provided by httplib2.
-	It's now possible to use SOCKS proxies.
-
-19/10/2008
-	Version 2.1.7
-	Now use httplib2 (http://code.google.com/p/httplib2/), MIT licence,
-	instead of urllib2.
-	The ability to use persistent connections makes the scan faster.
-
-09/10/2008
-	Version 2.1.6
-	HTTP authentication now works
-	Added the option -n (or --nice) to prevent endless loops during scanning
-
-28/01/2007
-	Version 2.1.5
-	Look at the Content-Type first instead of the document extension
-	Added BeautifulSoup as an optional module to correct bad html documents
-	(better use tidy if you can)
-
-24/10/2006
-	Version 2.1.4
-	Wildcard exclusion with -x (--exclude) option
-
-22/10/2006
-	Fixed an error with url parameters handling that appeared in the previous
-	version.
-	Fixed a typo in lswww.py (setAuthCreddentials: one 'd' is enough)
-
-07/10/2006
-	Version 2.1.3
-	Three verbose modes with the -v (--verbose) option
-		0: print only results
-		1: print dots for each page accessed (default mode)
-		2: print each found url during scan
-	Timeout in seconds can be set with the -t (--timeout) option
-	Fixed bug "crash when no content-type is returned"
-	Fixed an error with 404 webpages
-	Fixed a bug when the only parameter of an url is a forbidden one
-
-09/08/2006
-	Version 2.1.2
-	Fixed a bug with regular expressions
-
-05/08/2006
-	Version 2.1.1
-	Remove redundant slashes from urls
-	(e.g. http://server/dir//page.php converted to
-	http://server/dir/page.php)
-
-20/07/2006
-	Version 2.1.0 with urllib2
-
-11/07/2006
-	-r (--remove) option to remove parameters from URLs
-	Generate URLs with GET forms instead of using POST by default
-	Support for Basic HTTP Auth added but doesn't work with Python 2.4.
-	Now use cookie files (option "-c file" or "--cookie file")
-	Extracts links from Location header fields
-
-
-06/07/2006
-	Extract links from "Location:" headers (HTTP 301 and 302)
-	Default type for "input" elements is set to "text"
-	(as written in the HTML 4.0 specifications)
-	Added "search" in input types (created for Safari browsers)
-
-04/07/2006
-	Fixed a bug with empty parameter tuples
-	(convert http://server/page?&a=2 to http://server/page?a=2)
-
-23/06/2006
-	Version 2.0.1
-	Take care of the "submit" type
-	No extra data sent when a page contains several forms
-	Corrected a bug with urls ending with '?'
-	Support Cookies!!
-
-25/04/2006
-	Version 2.0
-	Forms are extracted as a list of tuples, each containing a string
-	(url of the target script) and a dict mapping the field names to
-	their default value (or 'true' if empty)
-	Lists the scripts that handle uploads
-	Can now be used as a module
-
-19/04/2006
-	Version 1.1
-	Case-insensitive parsing of tags
-	Handle Ctrl+C to cleanly interrupt the program
-	Extract urls from form tags (action)
-
-12/10/2005
-	Version 1.0
-	Handle links that are syntactically valid but point to
-	nonexistent resources (404)
-
-11/09/2005
-	Beta4
-	Use the getopt module to easily specify the urls to visit
-	first, the urls to exclude (new!) or the proxy to use
-
-24/08/2005
-	Beta3
-	Added a timeout for page reads to avoid hanging on a buggy
-	script
-
-23/08/2005
-	Version beta2
-	Support for Apache-generated indexes
-	Filter on protocols
-	Handle links that go up the directory tree
-	Handle empty links
-
-02/08/2005
-	beta1 released
diff --git a/PKGBUILD b/PKGBUILD
index 003ed40..ed59f67 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -5,7 +5,7 @@
 pkgname=wapiti
 
 pkgver=3.0.3
-pkgrel=1
+pkgrel=2
 
 pkgdesc='A comprehensive web app vulnerability scanner written in Python'
 arch=('any')
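
Since pkgver stays at 3.0.3 and only pkgrel moves from 1 to 2, this is a packaging-only rebuild, here shipping the refreshed ChangeLog. A minimal sketch for checking and building the bumped package locally (standard makepkg usage, assumed rather than taken from this repo):

    # Sanity-check the bumped version fields (a PKGBUILD is plain bash)
    source PKGBUILD && echo "$pkgname $pkgver-$pkgrel"
    # Build and install the rebuilt package; -s resolves missing dependencies
    makepkg -si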