Our service is being used for phishing. This is probably mostly achieved by sending emails with Via URLs in them (is that true, or are there also IM or other phishing attacks?).
One suggestion from Roman at abuse.ch was to:
Another approach would be to figure out what normal, organic user traffic looks like, create a baseline for that, and review what doesn't fit into that baseline. Just as an example: you could conduct some analysis on the incoming traffic (HTTP Referer). If you suddenly saw a spike in clicks arriving from, e.g., an email service provider, it would be a signal that someone is sending out spam emails that link to your service.
Honestly that sounds great, but quite hard to implement and very "active".
Trying to catch incoming traffic from email
A variation of that might be to try to detect whether people are coming in from email services using the Referer header. After a very brief test using Gmail I couldn't spot a referrer, but that could be something to do with the http://httpbin.org/get endpoint I was using.
In general this type of approach would have two prongs:
Get a reliable list of web-mail providers
Work out how non web-mail programs like Outlook appear
We would then try and detect this and do something about it
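As a minimal sketch of the detection prong, assuming a hypothetical (and deliberately incomplete) starter list of web-mail hostnames — the real list and any handling of desktop clients like Outlook, which typically send no Referer at all, would still need to be worked out:

```python
from urllib.parse import urlparse

# Hypothetical starter set of web-mail referrer hosts; a real deployment
# would need a maintained list, which is exactly the hard part noted above.
WEBMAIL_DOMAINS = {
    "mail.google.com",
    "outlook.live.com",
    "mail.yahoo.com",
}

def is_webmail_referrer(referer):
    """Return True if the Referer header points at a known web-mail host.

    A missing or unparseable Referer returns False, since the header can
    legitimately be dropped (and mail programs like Outlook send none).
    """
    if not referer:
        return False
    host = urlparse(referer).hostname or ""
    return host in WEBMAIL_DOMAINS
```

Note the asymmetry: a match is a reasonably strong signal, but the absence of a match proves nothing, which is part of why this approach feels shaky on its own.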
Maybe flip that on its head
Trying to catch all known email providers in the universe sounds like a tricky task. What we might be able to do instead is ask "What does normal traffic look like?" Where do we get the majority of our incoming Via links from? If the intended use case is to allow people to share links on Twitter, for example, we might expect lots of referrers coming from there (albeit indirectly through bouncer).
Once we have a picture of what "normal" looks like, we could take action when we see something abnormal. This would be similar to the allow list approach being considered for URLs for Checkmate.
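The baseline idea could be sketched roughly like this — the ratio threshold and the "any unseen host is abnormal" rule are both made-up placeholders, and a real version would need smoothing and a much larger baseline window:

```python
from collections import Counter
from urllib.parse import urlparse

def referrer_counts(referers):
    """Aggregate Referer headers by hostname to build a traffic picture."""
    counts = Counter()
    for ref in referers:
        host = urlparse(ref).hostname if ref else None
        counts[host or "(none)"] += 1
    return counts

def flag_abnormal(current, baseline, ratio=5.0):
    """Return hosts whose share of current traffic exceeds `ratio` times
    their baseline share. Crude spike detector: a host never seen in the
    baseline is always flagged, which would be noisy in practice."""
    total_now = sum(current.values()) or 1
    total_base = sum(baseline.values()) or 1
    flagged = []
    for host, n in current.items():
        now_share = n / total_now
        base_share = baseline.get(host, 0) / total_base
        if base_share == 0 or now_share / base_share > ratio:
            flagged.append(host)
    return flagged
```

For example, if the baseline is dominated by t.co and a new batch of traffic suddenly arrives half from some mail host, only the mail host gets flagged.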
What action to take?
Some different options:
The most drastic is to disallow access
This might be too unreliable based on the Referer header alone, as the header can be dropped or faked
If we are concerned about phishing, an indirect screen which says "Continue on to annotate this page... only continue if you expect to be annotating" might do the job
This would make it very clear you aren't on the original site, but also give you a one click solution for carrying on
Maybe do this all the time?
If we are happy with a splash screen, we could do it 100% of the time. This would put some friction in for our users, but if we think it's effective at stopping phishing, it could be worth it. It's also miles easier for us to implement.
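A rough sketch of what that splash page could look like, rendered as a plain HTML string (the /proxy?url= route is a hypothetical name, not Via's actual URL scheme) — the key property is that the destination URL is shown prominently, so a phishing victim can see they are not on the original site:

```python
from html import escape
from urllib.parse import quote

def interstitial_html(target_url):
    """Render a hypothetical click-through page shown before proxying.

    Displays the destination explicitly and offers a one-click continue,
    matching the 'only continue if you expect to be annotating' idea.
    """
    shown = escape(target_url)                    # safe for display in HTML
    href = "/proxy?url=" + quote(target_url, safe="")  # safe as a query value
    return (
        "<!doctype html>"
        "<h1>Continue on to annotate this page?</h1>"
        f"<p>You are about to view <code>{shown}</code> through Via. "
        "Only continue if you expect to be annotating it.</p>"
        f'<p><a href="{href}">Continue</a></p>'
    )
```

Serving this unconditionally (the "100% of the time" option) would just mean returning this page for every first hit instead of only on suspicious referrers, which is why it's so much easier to implement.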