How to make Beautiful Soup output HTML entities?

I'm trying to sanitize and XSS-proof some HTML input from the client. I'm using Python 2.6 with Beautiful Soup. I parse the input, strip all tags and attributes not in a whitelist, and transform the tree back into a string.
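For reference, the stripping step looks roughly like this (a simplified sketch, not my exact code; the whitelist shown is only an example):

from BeautifulSoup import BeautifulSoup

# Illustrative whitelist: allowed tag -> allowed attributes (example only).
WHITELIST = {'p': [], 'b': [], 'i': [], 'a': ['href', 'title']}

def sanitize(html):
    soup = BeautifulSoup(html)
    for tag in soup.findAll(True):       # every Tag in the tree
        if tag.name not in WHITELIST:
            tag.extract()                # drop disallowed tags from the tree
        else:
            allowed = WHITELIST[tag.name]
            # BS3 keeps attributes as a list of (name, value) pairs
            tag.attrs = [(k, v) for k, v in tag.attrs if k in allowed]
    return unicode(soup)                 # back to a string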

However...

>>> unicode(BeautifulSoup('text < text'))
u'text < text'

That doesn't look like valid HTML to me. And with my tag stripper, it opens the way to all sorts of nastiness:

>>> print BeautifulSoup('<<script></script>script>alert("xss")<<script></script>/script>').prettify()
<
<script>
</script>
script>alert("xss")<
<script>
</script>
/script>

The <script></script> pairs will be stripped as disallowed tags, and what remains is not only an XSS attack, but valid HTML as well.

The obvious solution is to replace all < characters by &lt; when, after parsing, they are found not to belong to a tag (and similarly for >, &, ' and "). But the Beautiful Soup documentation only mentions the parsing of entities, not the producing of them. Of course I can run a replace over all NavigableString nodes, but since I might miss something, I'd rather let some tried and tested code do the work.
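To be concrete, this is roughly what I mean by a replace over the text nodes (a rough sketch using cgi.escape; I'm not sure it covers every case, which is exactly why I'd rather not rely on it):

import cgi
from BeautifulSoup import BeautifulSoup

def escape_text_nodes(soup):
    # findAll(text=True) yields every NavigableString in the tree
    for node in soup.findAll(text=True):
        # cgi.escape turns < > & into entities; quote=True also handles "
        node.replaceWith(cgi.escape(unicode(node), quote=True))
    return soup

>>> unicode(escape_text_nodes(BeautifulSoup('text < text')))
u'text &lt; text'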

Why doesn't Beautiful Soup escape < (and other magic characters) by default, and how do I make it do that?


N.B. I've also looked at lxml.html.clean. It seems to work on the basis of blacklisting, not whitelisting, so it doesn't seem very safe to me. Tags can be whitelisted, but attributes cannot, and it allows too many attributes for my taste (e.g. tabindex). Also, it gives an AssertionError on certain input. Not good.
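For completeness, this is roughly how I invoked it (tag list is illustrative; as far as I can tell, allow_tags requires remove_unknown_tags=False, and the attribute list is the built-in one, not mine):

from lxml.html.clean import Cleaner

# Illustrative tag whitelist; attributes cannot be whitelisted the same way.
cleaner = Cleaner(allow_tags=['p', 'b', 'i', 'a'],
                  remove_unknown_tags=False,  # needed when allow_tags is given
                  safe_attrs_only=True)       # built-in attribute list

print cleaner.clean_html('<p onclick="alert(1)" tabindex="3">hello</p>')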

Suggestions for other ways to clean HTML are also very welcome. I'm hardly the only person in the world trying to do this, yet there seems to be no standard solution.

asked by Thomas, 10 September 2010 at 12:50