
Attribute Filter/Transform Callbacks #58

Open
jhillyerd opened this issue Jan 7, 2018 · 6 comments

jhillyerd commented Jan 7, 2018

It would be helpful to have a hook to allow custom attribute filtering. I propose something much simpler than #24 that would integrate with the existing builder syntax:

package main

import (
	"strings"

	"github.com/microcosm-cc/bluemonday"
)

// AttrTransform is a user-provided function to manipulate the value
// of an attribute.
type AttrTransform func(v string) string

func main() {
	tf := func(v string) string {
		return strings.ToUpper(v)
	}

	p := bluemonday.NewPolicy()
	p.TransformAttrs(tf, "style").OnElements("div")
}

This would provide a convenient way to extend bluemonday without requiring the user to parse the HTML before or after calling Sanitize().

My goal is to implement CSS style attribute filtering for my project without forking bluemonday.
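To make the CSS use case concrete, here is a minimal sketch of what a transform matching the proposed AttrTransform signature could look like. The allow-list and the naive string splitting are illustrative assumptions; a real implementation would use a proper CSS parser.

```go
package main

import (
	"fmt"
	"strings"
)

// allowedCSSProps is a hypothetical allow-list for this sketch.
var allowedCSSProps = map[string]bool{
	"color":      true,
	"font-size":  true,
	"text-align": true,
}

// filterStyle keeps only allow-listed CSS declarations from a style
// attribute value; it matches the proposed AttrTransform signature.
func filterStyle(v string) string {
	var kept []string
	for _, decl := range strings.Split(v, ";") {
		parts := strings.SplitN(decl, ":", 2)
		if len(parts) != 2 {
			continue
		}
		prop := strings.ToLower(strings.TrimSpace(parts[0]))
		if allowedCSSProps[prop] {
			kept = append(kept, prop+":"+strings.TrimSpace(parts[1]))
		}
	}
	return strings.Join(kept, "; ")
}

func main() {
	fmt.Println(filterStyle("color: red; position: fixed; font-size: 12px"))
	// color:red; font-size:12px
}
```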


jhillyerd commented Jan 7, 2018

I'm happy to submit a PR for this, but would like some idea of whether this is a direction the project is willing to go.

Another potential use-case for me is rewriting Content-ID URLs in MIME emails such that they will make a GET request to my webmail server, e.g. <IMG SRC="cid:foo4*foo1@bar.net" ALT="IETF logo">
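A sketch of that second use-case, again shaped as the proposed AttrTransform. The `/attachments?cid=` endpoint is a made-up example path, not a real API:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// rewriteCID rewrites cid: URLs to a hypothetical webmail attachment
// endpoint so the browser issues a GET for the MIME part. Any other
// value passes through unchanged.
func rewriteCID(v string) string {
	if !strings.HasPrefix(strings.ToLower(v), "cid:") {
		return v
	}
	contentID := v[len("cid:"):]
	return "/attachments?cid=" + url.QueryEscape(contentID)
}

func main() {
	fmt.Println(rewriteCID("cid:foo4*foo1@bar.net"))
	// /attachments?cid=foo4%2Afoo1%40bar.net
}
```

Registered with something like p.TransformAttrs(rewriteCID, "src").OnElements("img"), this would cover the IMG example above.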


buro9 commented Jan 7, 2018

> extend bluemonday without requiring the user to parse the HTML before or after calling Sanitize()

The problem with this is that the transformations could introduce unsafe changes that Sanitize() would then fail to catch. It would in essence have to be multi-pass internally, with all transformations completed before Sanitize() is applied.

I've also found in the past that as soon as there is scope for multiple transformations to logically apply to the same element, it becomes necessary to provide a way to declare the sequence in which to apply them. A good example is mod_security rules, where checking input requires normalisation: decoding charsets, upper-casing, stripping spaces, and so on. The order of some of these steps is subtly sensitive; if the transformation chain runs in a different order, the output differs. Your use-case may be simple, but the suggestion would allow for:

p.TransformAttrs(tf1, "style").OnElements("div")
p.TransformAttrs(tf2, "style").Globally()
p.TransformAttrs(tf3, "style").OnElements("span", "div")

Which starts to look like you would need to think in much more depth about what the transformations are doing, because if any of them inspect the value before transforming it (quite likely) then you open yourself up to unexpected behaviour whenever a style attribute on a div is transformed, as all three funcs would now apply.
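The ordering hazard described above is easy to demonstrate with two toy transforms; the functions here are invented purely for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// stripSpaces removes all spaces from an attribute value.
func stripSpaces(v string) string { return strings.ReplaceAll(v, " ", "") }

// truncate5 keeps only the first five runes of an attribute value.
func truncate5(v string) string {
	r := []rune(v)
	if len(r) > 5 {
		r = r[:5]
	}
	return string(r)
}

func main() {
	in := " a b c d e f"
	// Applying the same two transforms in different orders gives
	// different results, so a declared sequence would be required.
	fmt.Println(truncate5(stripSpaces(in))) // abcde
	fmt.Println(stripSpaces(truncate5(in))) // ab
}
```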

Thinking aloud though:

  • I'm open to PRs that add CSS or SVG sanitization as new policy features similar to the existing ones, using third-party parsers for CSS, SVG, etc.
  • I'm less open to PRs for transformations, because they put security at risk and would need to be multi-pass anyway (so why add the complexity here; it goes against isolation and the UNIX philosophy).
  • I'm open to PRs for streaming sanitization, which in theory would allow bluemonday to be chained to another package that specializes in transformations, i.e. bluemonday.Sanitize(rubytuesday.Transform(html))

The only reason I didn't do streaming sanitization initially was to leave room for adding HTML validation, which requires a balanced tree and a full buffer, so I opted for the OWASP Java style interface as it implicitly buffers. But perhaps the benefit of streaming sanitization, enabling the chaining of non-buffering transformation packages, is actually quite substantial.

jhillyerd commented

I hadn't thought about the ordering problem for multiple transformations; you're right that it would make things unpredictable.

It doesn't really bother me that bluemonday doesn't work on streams. What I don't like is having to tokenize/parse HTML multiple times to add functionality; that can be expensive. In my case I'm not storing sanitized HTML; it is sanitized each time a user accesses it.

Maybe a way to transform the content of each html.Token before bluemonday processes it? There could only be a single callback function for all elements/attributes, but that should eliminate the ordering problem.


buro9 commented Jan 7, 2018

Yeah, that last bit is roughly what I mean: it might be possible to construct a tokenizer whose token stream is passed into the first stage (transform the things!), which emits a token stream for the next processor (sanitize the things!).

One tokenizer, and one stream of tokens, used by multiple packages. In bluemonday's case, to apply a sanitization policy; but other packages could do things like transformations, or data exfiltration security (e.g. detect leaked credit card patterns and mask them), etc.

jhillyerd commented

That sounds great. I don't have much experience with the tokenizer, so I won't volunteer to build that.

Looks like there is a similar proposal for Go's XML package: golang/go#19480


jhillyerd commented Feb 27, 2018

I spent some time playing around with html.Tokenizer, and read through the Go proposal I mentioned above. What ended up going into Go 1.10 was:

// A TokenReader is anything that can decode a stream of XML tokens, including a
// Decoder.
//
// When Token encounters an error or end-of-file condition after successfully
// reading a token, it returns the token. It may return the (non-nil) error from
// the same call or return the error (and a nil token) from a subsequent call.
// An instance of this general case is that a TokenReader returning a non-nil
// token at the end of the token stream may return either io.EOF or a nil error.
// The next Read should return nil, io.EOF.
//
// Implementations of Token are discouraged from returning a nil token with a
// nil error. Callers should treat a return of nil, nil as indicating that
// nothing happened; in particular it does not indicate EOF.
type TokenReader interface {
	Token() (Token, error)
}

I've prototyped something similar for bluemonday, although I had to add a method to set the token source, since the chain has to be built inside sanitize() to avoid a confusing API. Let me know if you think this is a good direction and I'll spend more time on docs and tests.

I was also on the fence about how to specify the filters to bluemonday; it may make more sense to add them to a Policy than to pass them into the Sanitize methods. WDYT?
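For the record, here is a sketch of the chaining shape this points at, mirroring xml.TokenReader for an HTML token stream. Every name here (HTMLTokenReader, sliceReader, upperReader) is hypothetical; a real version would carry html.Token rather than strings:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// HTMLTokenReader mirrors Go 1.10's xml.TokenReader for an HTML token
// stream; a real version would return html.Token instead of string.
type HTMLTokenReader interface {
	Token() (string, error)
}

// sliceReader yields tokens from a slice, then io.EOF.
type sliceReader struct{ toks []string }

func (r *sliceReader) Token() (string, error) {
	if len(r.toks) == 0 {
		return "", io.EOF
	}
	t := r.toks[0]
	r.toks = r.toks[1:]
	return t, nil
}

// upperReader wraps another reader and transforms each token as it
// passes through: the transform-then-sanitize chaining shape, with
// each stage consuming the previous one's stream.
type upperReader struct{ src HTMLTokenReader }

func (r *upperReader) Token() (string, error) {
	t, err := r.src.Token()
	if err != nil {
		return "", err
	}
	return strings.ToUpper(t), nil
}

func main() {
	var r HTMLTokenReader = &upperReader{src: &sliceReader{toks: []string{"<div>", "hi"}}}
	for {
		t, err := r.Token()
		if err != nil {
			break
		}
		fmt.Println(t)
	}
	// <DIV>
	// HI
}
```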
