
Bjorn Monnens: Clean install mac Sierra still eating enormous amounts of memory


In my computer lifetime, I’ve used Windows 3, NT, 95, … 10, Linux (mostly Gnome from 2002 onwards) and, since late 200X, Mac. Currently I’m mostly using Mac because it fits nicely between Windows and Linux.

At a certain point I was doing a demo on my Linux laptop and I got a blue screen in front of a client. Pretty much a no-no for everybody … So I decided to switch to Mac, and ever since I’ve had no problems doing demos and presentations.

My Mac is already pretty old (read: 3+ years). As I’ve installed a lot on it, I decided to re-install Sierra. Try the good old Windows way: re-install and get a fast machine again. But what happened pretty much blew my mind …

I did a clean install and afterwards checked my laptop’s memory consumption … the bare system was still eating 7 gigs of memory … Okay, I may have 16 in total, but why would my core OS, without any additional installations, need 7 gigs? This is utter madness and even a complete rip-off. It would mean that in order to run Sierra at a decent speed you always need 16 gigs … I’m following Microsoft pretty closely as they are more than ever focusing on good products and helping the open-source community (did you know Microsoft is the #1 contributor to open-source software?!). Even their Windows 10 is looking like a decent product again …

My current Mac is still doing fine, so I hope to postpone the purchase of a new machine to the far future … However, if I had to buy something right now, I wouldn’t know what to pick.
The new Macs really scare me. The touch bar is something that nobody seems to find useful; even worse, pretty much everybody I talked to hates it and finds it a waste of money … For myself, I find Microsoft did a good job with Windows 10, however the command line interface is still not up to par with the one from Mac or Linux … So if I had to buy something right now, it would probably be a laptop with an SSD, an i7 and 16 gigs of memory or more, that is completely supported by Linux. I would then install Linux Mint, as that’s still the distro that seems the most user friendly to me … if you have other suggestions, just let me know!

Let’s just hope Apple can turn it around and make decent desktops again that don’t eat all your memory for no reason.


Fabian Arrotin: Deploying Openstack through puppet on CentOS 7 - a Journey


It's not a secret that I've been playing/experimenting with OpenStack over the last few days. When I mention OpenStack, I should really say RDO, as it's RPM packaged, built and tested on the CentOS infra.

Now that it's time to deploy it in production, that's when you should have a deeper look at how to proceed and which tool to use. Sure, Packstack can help you set up a quick PoC, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agreed that it's not the kind of tool you want to use for a proper deployment.

So let's have a look at the available options. While I really like/prefer Ansible, we (the CentOS Project) still use puppet as our configuration management tool, which itself uses Foreman as the ENC. So let's look at both options.

  • Ansible: lots of native modules exist to manage an existing/already deployed OpenStack cloud, but nothing that really helps you set one up from scratch. OTOH it's true that OpenStack-Ansible exists, but that sets up the OpenStack components in LXC containers, and I wasn't really comfortable with the whole idea (YMMV)
  • Puppet: lots of puppet modules exist, so you can automatically reuse/import those into your existing puppet setup, and it seems to be the preferred method when discussing with people in #rdo (when not using TripleO, though)

So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules way". That was the beginning of a journey, where I saw a lot of Yaks to shave too.

It started with me trying to "just" reuse and adapt some existing modules I found. Wrong. Which is funny, because one of my mantras is: "Don't try to automate what you can't understand from scratch" (and I fully agree with Matthias' thoughts on this).

One could just read all the openstack puppet modules and then try to understand how to assemble them to build a cloud. But I remembered that Packstack itself is puppet driven. So I decided to have a look at what it generates and start from that to write my own module from scratch. How to proceed? Easy: on a VM, just install packstack, generate the answer file, "salt" it to your needs, and generate the manifests:

 yum install -y centos-release-openstack-ocata && yum install openstack-packstack -y
 packstack --gen-answer-file=answers.txt
 vim answers.txt
 packstack --answer-file=answers.txt --dry-run
 * The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests

So now we can have a look at all the generated manifests and start our own from scratch, re-importing all the needed openstack puppet modules. That's what I did … but then I started to encounter some issues. The first one was that the puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so CentOS 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le).

One of the OpenStack components is RabbitMQ, but the openstack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by openstack puppet. The first thing I had to do was investigate our own modules, as some have the same name but don't come from puppetlabs/forge; instead of analyzing all of those, I moved everything RDO related to a different environment so that it wouldn't conflict with some of our existing modules. Back to the RabbitMQ one: puppet threw errors when just trying to use it. First yak to shave: updating the whole CentOS infra puppet to a higher version because of a puppet bug. So let's rebuild puppet for CentOS 6/7 with a higher version on CBS.

That of course means testing our own modules, on our test Foreman/puppetmasterd instance first, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.

After the rabbitmq issue was solved, I encountered other ones, this time coming from the openstack puppet modules, as the .rb ruby code used for the types/providers expected ruby 2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So another yak to shave: migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the same Foreman version running on the CentOS 6 node and then following the procedure, but then, again, time lost to test the update/upgrade and also all the other modules, etc. (one can see why I prefer agentless cfgmgmt).

Finally I found that some of the openstack puppet modules don't touch the whole config. Let me explain why. In OpenStack Ocata, some things are mandatory, like the Placement API, but despite all the classes being applied, I had issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my puppet code for the user/password used to configure the rabbitmq settings, but that was solved and applied correctly in /etc/nova/nova.conf (setting "transport_url="). But the openstack nova services (all nova-*.log files, btw) kept saying that the given credentials were refused by rabbitmq, while they worked when tested manually.

After having verified the rabbitmq logs, I saw that despite what was configured in nova.conf, services were still trying to use the wrong user/pass to connect to rabbitmq. Strange, as ::nova::cell_v2::simple_setup was included and was also supposed to use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happened: in fact, even if you modify nova.conf, some settings are stored in the mysql DB, and you can see those (the "wrong" ones in my case) with:

nova-manage cell_v2 list_cells --debug

Something to keep in mind for an initial deployment: if your rabbitmq user/pass needs to be changed, puppet will not complain, but it will only update the conf file, not the settings imported first by puppet into the DB (table nova_api.cell_mapping if you're interested). After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the puppet module/manifests from puppetmasterd, to confirm.
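For illustration, here is one way you could look at what puppet stored there on the first run (a sketch: the table and column names come from the nova_api.cell_mapping table mentioned above, the rest assumes local MySQL access on the controller node):

 mysql nova_api -e "SELECT uuid, name, transport_url FROM cell_mapping;"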

That was quite a journey, and it's probably only the beginning, but it's a good start. Now to investigate other options for cinder/glance, as it seems Gluster was deprecated and I'd like to know why.

Hope this helps if you need to bootstrap openstack with puppet!

Bjorn Monnens: JGitflow still alive?


During our development we use the Gitflow process. (Some may prefer other workflows, but this works well in our current situation.) To support this, we use the JGitflow Maven plugin (developed by Atlassian).

When our software is ready to go out the door, we create a release branch. This release branch is then filled with documentation regarding the release (Jira tickets, git logs, build numbers, … pretty much everything needed to make it trackable inside the artifact).
From time to time the back merge of this release branch towards develop would fail and cause us all kinds of trouble. By default the JGitflow Maven plugin tries to do a back merge and delete the branch. This was behaviour we wanted to change.
So during my spare time (read: nights) I decided it was time to do some nightly hacking. The result is a new maven option “backMergeToDevelop” that defaults to true but can be overridden, as shown below.
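As a sketch of how that would look on the command line (assuming the option is exposed as a regular plugin parameter; release-finish is an existing JGitflow goal):

 mvn jgitflow:release-finish -DbackMergeToDevelop=false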

I created the necessary tests to validate that it works and created a pull request; however, no follow-up from Atlassian so far … Anybody from Atlassian reading this? Reach out to me so I can get this into the next release (if any is coming …?)

It seems there have been no commits since 2015-09 …

 

Dries Buytaert: 7-Eleven using Drupal


7-Eleven is the largest convenience store chain in the world with 60,000 locations around the globe. That is more than any other retailer or food service provider. In conjunction with the release of its updated 7-Rewards program, 7-Eleven also relaunched its website and mobile application using Drupal 8! Check it out at https://www.7-eleven.com, and grab a Slurpee while you're at it!

Les Jeudis du Libre: Mons, June 15: Continuous integration with Jenkins


This Thursday, June 15, 2017 at 7 p.m., the 60th Mons session of the Jeudis du Libre de Belgique will take place.

The topic of this session: continuous integration with Jenkins

Theme: continuous integration | automated deployment | process

Audience: open to all

Speaker: Dimitri Durieux (CETIC)

Location: the Campus technique (ISIMs) of the Haute Ecole en Hainaut, Avenue V. Maistriau 8a, Salle Académique, 2nd building (see this map on the ISIMs website, and here on the OpenStreetMap map).

Participation is free and only requires registering your name, preferably in advance, or at the entrance to the session. Please indicate your intention by registering via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.

The Jeudis du Libre in Mons also benefit from the support of our partners: CETIC, OpenSides, MeaWeb and Phonoid.

If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list so you automatically receive the announcements.

As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons meetings take place every third Thursday of the month and are organized in the buildings of, and in collaboration with, the Mons universities and colleges involved in training computer scientists (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which promotes free software.

Description: In order to improve resource management, virtualization and the cloud have revolutionized the way we design and deploy applications. The complexity of IT systems has evolved considerably: it is no longer about deploying and maintaining one monolithic application but a set of services interacting with each other. Developers therefore face a high level of complexity when testing, integrating and deploying applications.

On top of this new technical context come agile, fast and iterative development methodologies. They encourage regularly deploying new versions of components that bring new features. Testing, integration and deployment therefore have to happen at a much faster pace than before.

Developers thus have to perform complex test, integration and deployment tasks very regularly. They need a tool, and several exist. In this presentation I propose to introduce Jenkins, the open-source tool offering the largest number of customization options to date. The presentation will cover the basics of Jenkins and of so-called continuous integration practices, illustrated with concrete examples of how the solution is used.

Jenkins is a tool that is installed as a web application and lets you configure automated tasks. Many kinds of tasks are possible, notably compiling or publishing a module, integrating a system, testing a feature, or deploying to the production environment. Tasks can be triggered automatically as needed; for example, an integration test can be run on every change to the code base of one of the modules. The individual features are fairly simple and each addresses a very specific need, but they can be combined to automate most of the peripheral tasks, leaving more time for developing your software.

Short bio: Dimitri Durieux is a member of CETIC specialized in software quality. He has worked on several research projects in contexts of varying complexity and constraints, and helps Walloon industry produce better-quality code. In particular, he has assessed code quality and development practices in more than 50 Walloon IT companies to help them put effective quality management in place. During his time at CETIC, he has had the opportunity to deploy and maintain several Jenkins instances while demonstrating the feasibility of complex use cases of this technology for several Walloon companies.

Jeroen De Dauw: OOP file_get_contents


I’m happy to announce the immediate availability of FileFetcher 4.0.0.

FileFetcher is a small PHP library that provides an OO way to retrieve the contents of files.

What’s OO about such an interface? You can inject an implementation of it into a class, so the class does not know the details of the implementation and you get to choose which implementation to provide. Calling file_get_contents does not allow swapping the implementation, as it is a procedural/static call relying on global state.

Library number 8234803417 that does this exact thing? Probably not. The philosophy behind this library is to provide a very basic interface (FileFetcher) that, while insufficient for plenty of use cases, is ideal for a great many others, in particular replacing procedural file_get_contents calls. The provided implementations are there to facilitate testing and common generic tasks around the actual file fetching. You are encouraged to create your own core file fetching implementation in your codebase, presumably an adapter to a library that focuses on this task, such as Guzzle.

So what is in it then? The library provides two trivial implementations of the FileFetcher interface at its heart:

  • SimpleFileFetcher: Adapter around file_get_contents
  • InMemoryFileFetcher: Adapter around an array provided to its constructor (construct with [] for a “throwing fetcher”)

It also provides a number of generic decorators.

Version 4.0.0 brings PHP7 features (scalar type hints \o/) and adds a few extra handy implementations. You can add the library to your composer.json (jeroen/file-fetcher) or look at the documentation on GitHub. You can also read about its inception in 2013.
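If you want to give it a try, installation is the usual Composer one-liner (the package name is the one mentioned above):

 composer require jeroen/file-fetcher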

Xavier Mertens: Identifying Sources of Leaks with the Gmail “+” Feature


For years, Google has been offering two nice features on its gmail.com platform to get more out of your email address. You can play with the “+” (plus) sign or the “.” (dot) to create more email addresses linked to your primary one. Let’s take an example with John, who owns john.doe@gmail.com. John can share the email address “john.doe+soccer@gmail.com” with his friends who play soccer, or “john.doe+security@gmail.com” to register on forums about information security. It’s the same with dots: Google just ignores them, so “john.doe@gmail.com” is the same as “john.d.oe@gmail.com”. Many people use the “+” format to organize the flood of email they receive every day and automatically process it / store it in separate folders. That’s nice, but it can also be very useful to discover where an email address is being used.

A few days ago, Troy Hunt, the owner of the haveibeenpwned.com service (if you don’t know it yet, just have a look and register!), announced that new massive dumps were in the wild, for a total of ~1B passwords! The new dumps are called “Exploit.In” (593M entries) and “Anti Public Combo List” (427M entries). The sources of the leaks are not clear. I grabbed a copy of the data and searched for Google “+” email addresses.

Not surprisingly, I found 28K+ unique accounts! I extracted the strings after the “+” sign and indexed everything in Splunk:
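For illustration, the extraction itself can be done with a rough one-liner like the sketch below (the dump file names and their plain-text format are assumptions; I used Splunk for the actual indexing and counting):

 grep -ohE '[A-Za-z0-9._%-]+\+[A-Za-z0-9._-]+@gmail\.com' dumps/*.txt \
   | sed -E 's/.*\+([^@]+)@gmail\.com/\1/' \
   | sort | uniq -c | sort -rn | head -25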

(Screenshot: Gmail tags indexed in Splunk)

As you can see, we recognise some known online services:

  • xtube (adult content)
  • friendster (social network)
  • filesavr (file exchange service in the cloud)
  • linkedin (social network)
  • bioware (gaming platform)

This does not mean that those platforms were breached (ok, LinkedIn was) but it can give some indicators…

Here is a dump of the top identified tags (with more than 3 characters to keep the list useful). You can download the complete CSV here.

Tag              Count
xtube            37
spam             18
filedropper      17
daz3d            12
bioware          11
friendster       10
savage           10
linkedin         8
eharmony         7
filesavr         6
bryce            5
savage2          5
porn             4
precyl           4
bravenet         3
comicbookdb      3
freebie          3
freebiejeebies   3
freebies         3
hackforums       3
junk             3
kffl             3
social           3
youporn          3
97979797         2
brice            2
dazstudio        2
detnews          2
eharm            2
free             2
gamigo           2
hack             2
heroesofnewerth  2
itickets         2
lists            2
luther           2
paygr            2
policeauctions   2
test             2
texasmonthly     2
toddy            2
trzy             2
usercash         2
xtube2           2

[The post Identifying Sources of Leaks with the Gmail “+” Feature has been first published on /dev/random]

Philip Van Hoof: How do they do it? Asynchronous undo and redo editors


Imagine we want an editor that has undo and redo capability, but the operations on the editor are all asynchronous. This implies that undo and redo are also asynchronous operations.

We want all this to be available in QML, we want to use QFuture for the asynchronous stuff and we want to use QUndoCommand for the undo and redo capability.

But how do they do it?

First of all we will make a status object, to put the status of the asynchronous operations in (asyncundoable.h).

class AbstractAsyncStatus : public QObject
{
    Q_OBJECT

    Q_PROPERTY(bool success READ success CONSTANT)
    Q_PROPERTY(int extra READ extra CONSTANT)

public:
    AbstractAsyncStatus(QObject *parent) : QObject(parent) {}

    virtual bool success() = 0;
    virtual int extra() = 0;
};

We will be passing it around as a QSharedPointer, so that lifetime management becomes easy. But typing that out is going to give us long APIs. So let’s make a typedef for that (asyncundoable.h).

typedef QSharedPointer<AbstractAsyncStatus> AsyncStatusPointer;

Now let’s make ourselves an undo command that allows us to wait for asynchronous undo and asynchronous redo. We’re combining QUndoCommand and QFutureInterface here (asyncundoable.h).

class AbstractAsyncUndoable : public QUndoCommand
{
public:
    AbstractAsyncUndoable(QUndoCommand *parent = nullptr)
        : QUndoCommand(parent)
        , m_undoFuture(new QFutureInterface<AsyncStatusPointer>())
        , m_redoFuture(new QFutureInterface<AsyncStatusPointer>()) {}

    QFuture<AsyncStatusPointer> undoFuture() { return m_undoFuture->future(); }
    QFuture<AsyncStatusPointer> redoFuture() { return m_redoFuture->future(); }

protected:
    QScopedPointer<QFutureInterface<AsyncStatusPointer>> m_undoFuture;
    QScopedPointer<QFutureInterface<AsyncStatusPointer>> m_redoFuture;
};

Okay, let’s implement these with an example operation. First the concrete status object (asyncexample1command.h).

class AsyncExample1Status : public AbstractAsyncStatus
{
    Q_OBJECT

    Q_PROPERTY(bool example1 READ example1 CONSTANT)

public:
    AsyncExample1Status(bool success, int extra, bool example1, QObject *parent = nullptr)
        : AbstractAsyncStatus(parent)
        , m_example1(example1)
        , m_success(success)
        , m_extra(extra) {}

    bool example1() { return m_example1; }
    bool success() Q_DECL_OVERRIDE { return m_success; }
    int extra() Q_DECL_OVERRIDE { return m_extra; }

private:
    bool m_example1 = false;
    bool m_success = false;
    int m_extra = -1;
};

Let’s make a QUndoCommand that uses a timer to simulate asynchronous behavior. We could also use QtConcurrent’s run function to use a QThreadPool and QRunnable instances that also implement QFutureInterface, of course. Seasoned Qt developers know what I mean. For the sake of example, I wanted to illustrate that QFuture can also be used for asynchronous things that aren’t threads. We’ll use the lambda because QUndoCommand isn’t a QObject, so no easy slots. That’s the only reason (asyncexample1command.h).

class AsyncExample1Command : public AbstractAsyncUndoable
{
public:
    AsyncExample1Command(bool example1, QUndoCommand *parent = nullptr)
        : AbstractAsyncUndoable(parent), m_example1(example1) {}

    void undo() Q_DECL_OVERRIDE {
        m_undoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status(true, 1, m_example1));
            m_undoFuture->reportFinished(&result);
            timer->deleteLater();
        });
        timer->start(1000);
    }

    void redo() Q_DECL_OVERRIDE {
        m_redoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot(true);
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status(true, 2, m_example1));
            m_redoFuture->reportFinished(&result);
            timer->deleteLater();
        });
        timer->start(1000);
    }

private:
    QTimer m_timer;
    bool m_example1;
};

Let’s now define something we get from the strategy design pattern: an editor behavior. Implementations provide an editor with all its editing behaviors (abtracteditorbehavior.h).

class AbstractEditorBehavior : public QObject
{
    Q_OBJECT

public:
    AbstractEditorBehavior(QObject *parent) : QObject(parent) {}

    virtual QFuture<AsyncStatusPointer> performExample1(bool example1) = 0;
    virtual QFuture<AsyncStatusPointer> performUndo() = 0;
    virtual QFuture<AsyncStatusPointer> performRedo() = 0;
    virtual bool canRedo() = 0;
    virtual bool canUndo() = 0;
};

So far so good, so let’s make an implementation that has a QUndoStack and that therefore is undoable (undoableeditorbehavior.h).

class UndoableEditorBehavior : public AbstractEditorBehavior
{
public:
    UndoableEditorBehavior(QObject *parent = nullptr)
        : AbstractEditorBehavior(parent), m_undoStack(new QUndoStack) {}

    QFuture<AsyncStatusPointer> performExample1(bool example1) Q_DECL_OVERRIDE {
        AsyncExample1Command *command = new AsyncExample1Command(example1);
        m_undoStack->push(command);
        return command->redoFuture();
    }

    QFuture<AsyncStatusPointer> performUndo() {
        const AbstractAsyncUndoable *undoable = dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command(m_undoStack->index() - 1));
        m_undoStack->undo();
        return const_cast<AbstractAsyncUndoable *>(undoable)->undoFuture();
    }

    QFuture<AsyncStatusPointer> performRedo() {
        const AbstractAsyncUndoable *undoable = dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command(m_undoStack->index()));
        m_undoStack->redo();
        return const_cast<AbstractAsyncUndoable *>(undoable)->redoFuture();
    }

    bool canRedo() Q_DECL_OVERRIDE { return m_undoStack->canRedo(); }
    bool canUndo() Q_DECL_OVERRIDE { return m_undoStack->canUndo(); }

private:
    QScopedPointer<QUndoStack> m_undoStack;
};

Now we only need an editor, right (editor.h)?

class Editor : public QObject
{
    Q_OBJECT

    Q_PROPERTY(AbstractEditorBehavior* editorBehavior READ editorBehavior CONSTANT)

public:
    Editor(QObject *parent = nullptr)
        : QObject(parent), m_editorBehavior(new UndoableEditorBehavior) {}

    AbstractEditorBehavior* editorBehavior() { return m_editorBehavior.data(); }

    Q_INVOKABLE void example1Async(bool example1) {
        QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
        connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                this, &Editor::onExample1Finished);
        watcher->setFuture(m_editorBehavior->performExample1(example1));
    }

    Q_INVOKABLE void undoAsync() {
        if (m_editorBehavior->canUndo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onUndoFinished);
            watcher->setFuture(m_editorBehavior->performUndo());
        }
    }

    Q_INVOKABLE void redoAsync() {
        if (m_editorBehavior->canRedo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onRedoFinished);
            watcher->setFuture(m_editorBehavior->performRedo());
        }
    }

signals:
    void example1Finished(AsyncExample1Status *status);
    void undoFinished(AbstractAsyncStatus *status);
    void redoFinished(AbstractAsyncStatus *status);

private slots:
    void onExample1Finished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
            dynamic_cast<QFutureWatcher<AsyncStatusPointer> *>(sender());
        emit example1Finished(watcher->result().objectCast<AsyncExample1Status>().data());
        watcher->deleteLater();
    }

    void onUndoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
            dynamic_cast<QFutureWatcher<AsyncStatusPointer> *>(sender());
        emit undoFinished(watcher->result().objectCast<AbstractAsyncStatus>().data());
        watcher->deleteLater();
    }

    void onRedoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
            dynamic_cast<QFutureWatcher<AsyncStatusPointer> *>(sender());
        emit redoFinished(watcher->result().objectCast<AbstractAsyncStatus>().data());
        watcher->deleteLater();
    }

private:
    QScopedPointer<AbstractEditorBehavior> m_editorBehavior;
};

Okay, let’s register this type to make it known in QML and make ourselves a main function (main.cpp).

#include <QtQml>
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <editor.h>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    qmlRegisterType<Editor>("be.codeminded.asyncundo", 1, 0, "Editor");
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

Now, let’s make ourselves a simple QML UI to use this with (main.qml).

import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2
import be.codeminded.asyncundo 1.0

Window {
    visible: true
    width: 360
    height: 360

    Editor {
        id: editor
        onUndoFinished: text.text = "undo"
        onRedoFinished: text.text = "redo"
        onExample1Finished: text.text = "whoohoo " + status.example1
    }

    Text {
        id: text
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }

    Action {
        shortcut: "Ctrl+z"
        onTriggered: editor.undoAsync()
    }

    Action {
        shortcut: "Ctrl+y"
        onTriggered: editor.redoAsync()
    }

    Button {
        onClicked: editor.example1Async(99)
    }
}

You can find the sources of this complete example at github. Enjoy!


Xavier Mertens: [SANS ISC] When Bad Guys are Pwning Bad Guys…


I published the following diary on isc.sans.org: “When Bad Guys are Pwning Bad Guys…“.

A few months ago, I wrote a diary about webshells[1] and the numerous interesting features they offer. There are plenty of web shells available; they are easy to find and install. They are usually delivered as one big obfuscated (read: Base64, ROT13 encoded and gzip’d) PHP file that can simply be dropped on a compromised computer. Some of them look nice and professional, like the RC-Shell… [Read more]

[The post [SANS ISC] When Bad Guys are Pwning Bad Guys… has been first published on /dev/random]

Lionel Dricot: Has humanity found the meaning of life?


What is the meaning of life? Why are there living beings in the universe rather than just inert matter? For those of you who have already asked yourselves these questions, I have good news and bad news.

The good news is that science may have found an answer.

The bad news is that you are not going to like it.

The imperfections of the big bang

If the big bang had been a perfect event, the universe would be uniform and smooth today. But imperfections appeared.

Because of the fundamental forces, these imperfections clumped together until they formed stars and planets made of matter.

If we are not, today, a simple and perfectly smooth soup of atoms, but solid beings on a planet surrounded by void, it is thanks to these imperfections!

The law of entropy

Thanks to thermodynamics, we have understood that nothing is lost and nothing is created: the energy of a system is constant. To cool its inside, a fridge necessarily has to heat the outside. The energy of the universe is, and will remain, constant.

The same is not true of entropy!

To put it simply, entropy can be seen as a "quality of energy". The higher the entropy, the less usable the energy.

For example, if you place a boiling cup of tea in a very cold room, the entropy of the system is low. Over time, the cup of tea cools down, the room warms up, and the entropy increases until it reaches its maximum when the cup and the room are at the same temperature. This very intuitive phenomenon may be due to quantum entanglement and may be at the root of our perception of the flow of time.

For an outside observer, the amount of energy in the room has not changed. The average temperature of the whole is still the same. And yet something has been lost: the energy is no longer exploitable.

It would have been possible, for example, to use the fact that the cup of tea warms the surrounding air to drive a turbine and generate electricity. That is no longer possible once the cup and the room are at the same temperature.
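To make this a little more concrete, here is a simplified sketch in formula form (assuming an amount of heat Q flows from the tea at temperature T_tea to the room at T_room, both roughly constant):

 \Delta S = \frac{Q}{T_{room}} - \frac{Q}{T_{tea}} > 0 \quad \text{as long as } T_{tea} > T_{room}

Once the two temperatures are equal, no exchange of heat can increase the entropy any further, and no more work can be extracted.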

Without an external supply of energy, every system will see its entropy increase. The same goes for the universe: if the universe does not collapse under its own weight, the stars will inevitably cool down and go out, like the cup of tea. The universe will, inexorably, become a perfect continuum of constant temperature. This is what is known as the "Heat Death" of the universe.

The appearance of life

Life seems to be an exception. After all, are we not complex and highly ordered organisms, which implies a very low entropy? How can we explain the appearance of life, and thus of elements with a lower entropy than their environment, in a universe whose entropy keeps increasing?

Jeremy England, a physicist at MIT, offers a new and particularly original answer: life would be the most efficient way to dissipate heat, and therefore to increase entropy.

On a planet like Earth, atoms and molecules are constantly bombarded with strong, usable energy: the sun. This creates a situation of very low entropy.

Naturally, the atoms then organize themselves to dissipate that energy. Physically, the most efficient way to dissipate the received energy is to reproduce. By reproducing, matter creates entropy.

The first molecule capable of such a feat, RNA, was the first step towards life. The mechanisms of natural selection favouring reproduction then did the rest.

According to Jeremy England, life would be mechanically inevitable as long as there is enough energy.

Humanity in the service of entropy

If England's theory is confirmed, it would be very bad news for humanity.

Because if the purpose of life is to maximize entropy, then what we are doing with the Earth (overconsumption, wars, nuclear bombs) is perfectly logical. Destroying the universe as fast as possible to turn it into a soup of atoms is the very meaning of life!

The only dilemma we might then face would be: should we destroy the Earth immediately, or manage to develop far enough to bring destruction to the rest of the universe?

In any case, the ultimate goal of life would be to make the universe perfect, bland, uniform. To destroy itself.

What is particularly distressing is that, seen from this angle, humanity seems to be doing incredibly well at it! Far too well…

 

Photo by Bardia Photography.

Did you enjoy this read? Support the author on Tipeee, Patreon, Paypal or Liberapay. Even a symbolic donation makes all the difference! Then let's meet up on Facebook, Medium, Twitter or Mastodon.

This text is published under the CC-By BE license.

Mattias Geniar: Ways the WannaCry ransomware could have been much worse


The post Ways the WannaCry ransomware could have been much worse appeared first on ma.ttias.be.

If you're in tech, you will have heard about the WannaCry/WannaCrypt ransomware doing the rounds. The infection started on Friday May 12th 2017 by exploiting MS17-010, a vulnerability in Windows' SMB file sharing. The virus exploited this known vulnerability, installed a cryptolocker and extorted the owner of the Windows machine into paying a ransom to get the files decrypted.

As far as worms go, this one went viral at an unprecedented scale.

But there are some design decisions in this cryptolocker that prevent it from being much worse. This post is a thought exercise; the next one will probably implement one of these methods. Make sure you're prepared.

Time based encryption

This WannaCry ransomware found the security vulnerability, installed the cryptolocker and immediately started encrypting the files.

Imagine the following scenario;

  • Day 1: worm goes round and infects vulnerable SMB, installs backdoor, keeps quiet, infects other machines
  • Day 14: worm activates itself, starts encrypting files

With WannaCrypt, it took only a few hours to reach world-scale infections, alerting everyone and their grandmother that something big was going on. Mainstream media picked up on it. Train stations showed cryptolocker screens. Everyone started patching. What if the worm gets a few days' head start?

By keeping quiet, the attacker risks getting caught, but in many cases this can be avoided by excluding known IPv4 networks for banks or government organizations. How many small businesses or large organizations do you think would notice a sudden extra running .exe in the background? Not enough to trigger world-wide coverage, I bet.

Self-destructing files

A variation to the scenario above;

  • Day 1: worm goes round, exploits SMB vulnerability, encrypts each file, but still allows files to remain opened (1)
  • Day 30: worm activates itself, removes decryption key for file access and prompts for payment

How are your back-ups at that point? All files on the machine have some kind of hidden time bomb in them. Every version of those files you have in back-up is affected. The longer they can keep that hidden, the bigger the damage.

More variations of this exist, with Excel or VBA macros etc., and they all boil down to: modify the file and render it unusable unless proper identification is shown.

(1) This should be possible with shortcuts to the files, first opening some kind of wrapper script to decrypt the files before they launch. The decryption key is stored in memory and re-requested from its Command & Control servers whenever the machine reboots.

Extortion with your friends

The current scheme is: your files get encrypted, you can pay to get your files back.

What if it's not your own files you're responsible for? What if they are the files of your colleagues, family or friends? What if you had to pay $300 to recover the files of someone you know?

Peer pressure works, especially if the blame angle is played: it's your fault someone you know got infected. Do you feel responsible at that point? Would that make you pay?

From a technical POV, it's tricky but not impossible to identify known associates of a victim. This could only happen at a smaller scale, but it might yield bigger rewards.

Cryptolocker + Windows Update DDoS?

Roughly 200,000 affected Windows PCs have been spotted online. There are probably a lot more that haven't made it into the online reports yet. That's quite a few PCs to have control over, as an attacker.

The media is now jumping on the news, urging everyone to update. What if the 200k infected machines were to launch an effective DDoS against the Windows Update servers? With everyone trying to update, the number of possible targets is dropping every hour.

If you could effectively take down the means with which users can protect themselves, you can create bigger chaos and a bigger market to infect.

The next cryptolocker isn't going to be "just" a cryptolocker; in all likelihood it'll combine its encryption capabilities with even more damaging means.

Stay safe

How to prevent any of these?

  1. Enable auto-updates on all your systems (!!)
  2. Have frequent back-ups, store them long enough

Want more details? Check out my earlier post: Staying Safe Online – A short guide for non-technical people.

The post Ways the WannaCry ransomware could have been much worse appeared first on ma.ttias.be.

Dries Buytaert: Think beyond with Acquia Labs


For most of the history of the web, the website has been the primary means of consuming content. These days, however, with new channels being introduced every day, the website is increasingly just the bare minimum. Digital experiences now span connected Internet of Things (IoT) devices, smartphones, chatbots, augmented and virtual reality headsets, and even so-called zero user interfaces which lack the traditional interaction patterns we're used to. More and more, brands are trying to reach customers through browserless experiences and push-based rather than pull-based content, often without the website being visited at all.

Last year, we launched a new initiative called Acquia Labs, our research and innovation lab, part of the Office of the CTO. Acquia Labs aims to link together the new realities in our market, our customers' needs in coming years, and the goals of Acquia's products and open-source efforts in the long term. In this blog post, I'll update you on what we're working on at the moment, what motivates our lab, and how to work with us.

Alexa, ask GeorgiaGov

One of the Acquia Labs' most exciting projects is our ongoing collaboration with GeorgiaGov Interactive. Through an Amazon Echo integration with the Georgia.gov Drupal website, citizens can ask their government questions. Georgia residents will be able to find out how to apply for a fishing license, transfer an out-of-state driver's license, and register to vote just by consulting Alexa, which will also respond with sample follow-up questions to help the user move forward. It's a good example of how conversational interfaces can change civic engagement. Our belief is that conversational content and commerce will come to define many of the interactions we have with brands.

The state of Georgia has always been on the forefront of web accessibility. For example, from 2002 until 2006, Georgia piloted a time-limited text-to-speech telephony service which would allow website information and popular services like driver's license renewal to be offered to citizens. Today, it publishes accessibility standards and works hard to make all of its websites accessible for users of assistive devices. This Alexa integration for Georgia will continue that legacy by making important information about working with state government easy for anyone to access.

And as a testament to the benefits of innovation in open source and our commitment to open-source software, Acquia Labs backported the Drupal 8 module for Amazon Echo to Drupal 7.

Here's a demo video showing an initial prototype of the Alexa integration:

Shopping with chatbots

In addition to physical devices like the Amazon Echo, Acquia Labs has also been thinking about what is ahead for chatbots, another important component of the conversational web. Unlike in-home devices, chatbots are versatile because they can be used across multiple channels, whether on a native mobile application or a desktop website.

The Acquia Labs team built a chatbot demonstrating an integration with the inventory system and recipe collection available on the Drupal website of an imaginary grocery store. In this example, a shopper can interact with a branded chatbot named "Freshbot" to accomplish two common tasks when planning an upcoming barbecue.

First, the user can use the chatbot to choose the best recipes from a list of recommendations with consideration for number of attendees, dietary restrictions, and other criteria. Second, the chatbot can present a shopping list with correct quantities of the ingredients she'll need for the barbecue. The ability to interact with a chatbot assistant rather than having to research and plan everything on your own can make hosting a barbecue a much easier and more efficient experience.

Check out our demo video, "Shopping with chatbots", below:

Collaborating with our customers

Many innovation labs are able to work without outside influence or revenue targets by relying on funding from within the organization. But this can potentially create too much distance between the innovation lab and the needs of the organization's customers. Instead, Acquia Labs explores new ideas by working on jointly funded projects for our clients.

I think this model for innovation is a good approach for the next generation of labs. This vision allows us to help our customers stake ground in new territory while also moving our own internal progress forward. For more about our approach, check out this video from a panel discussion with our Acquia Labs lead Preston So, who introduced some of these ideas at SXSW 2017.

If you're looking at possibilities beyond what our current offerings are capable of today, if you're seeking guidance and help to execute on your own innovation priorities, or if you have a potential project that interests you but is too forward-looking right now, Acquia Labs can help.

Special thanks to Preston So for contributions to this blog post and to Nikhil Deshpande (GeorgiaGov Interactive) and ASH Heath for feedback during the writing process.

Fabian Arrotin: Linking Foreman with Zabbix through MQTT


It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time", as I recently needed to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7).

Within the CentOS Infra, we use Foreman as an ENC for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative and should automatically configure the monitoring solution you have in place in your infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

What I've seen so far is that you use exported resources on each node, store those in another PuppetDB, and then, on the monitoring node, reapply all those resources. The problem with such a solution is that it's "expensive" and, when you think about it, a little strange to export the "knowledge" from Foreman into another DB and then let puppet compile a huge catalog on the monitoring side, even if nothing changed.

Another issue is that in our Zabbix setup we also have some nodes that aren't really managed by Foreman/puppet (but by other automation around Ansible), so I had to use an intermediate step that other tools can also use/abuse for the same purpose.

The other reason is that I admit I'm a fan of "event-driven" configuration changes, so my idea was:

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (asynchronously, so that it doesn't slow down the foreman update operation itself)
  • let the Zabbix server know about that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components:

Here is a small overview of the process :

(Diagram: Foreman hook -> MQTT broker -> Zabbix)

Foreman hooks

Setting up foreman hooks is really easy: just install the package itself (tfm-rubygem-foreman_hooks.noarch), read the documentation, and then create your scripts. There are some examples for Bash and Python in the examples directory, but basically you just need to place your scripts in specific places. In my case I wanted to "trigger" an event on a node update (like adding a puppet class, or a variable/parameter change), so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.

One little remark though: if you add a new hook file, don't forget to restart foreman itself, so that it picks up that hook file; otherwise it will still be ignored and never run.
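To give you an idea, a minimal hook script could look like the sketch below. This is an assumption of what such a script can do, not a copy of mine: foreman_hooks passes the event name and the object label as arguments, and the broker hostname, topic and payload format are entirely up to you.

 #!/bin/bash
 # /usr/share/foreman/config/hooks/host/managed/update/10_notify_mqtt.sh
 event="$1"      # e.g. "update"
 hostname="$2"   # the host that was modified in Foreman
 # Publish a small JSON payload; the subscriber on the Zabbix side decides what to do with it
 mosquitto_pub -h mqtt.internal.example.org -t "foreman/host/update" \
   -m "{\"host\": \"${hostname}\", \"event\": \"${event}\"}"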

Mosquitto

Mosquitto itself is available in your favourite rpm repo, so installing it is a breeze. The reason why I selected mosquitto is that it's very lightweight (the package is under 200KB) and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest you read Jan-Piet Mens' dedicated blog post about it. I admit I discovered it by attending one of his talks on the topic, back in the Loadays.org days :-)

Zabbix-cli

While you can always talk to the raw API of Zabbix, I found it easier to use a tool I was already using for various tasks around Zabbix: zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are available on CBS.

So I plumbed it into a small service, started from a systemd unit file, that subscribes to a specific MQTT topic, parses the needed information (like the hostname and the zabbix templates to link, unlink, etc.) and then updates that in Zabbix itself (from the log output):

[+] 20170516-11:43 :  Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "dev-registry.lon1.centos.org" 
[Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: dev-registry.lon1.centos.org ({"hostid":"10174"})
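The subscriber side can stay equally small. A rough sketch of such a loop is shown below; it assumes a payload that is simply a "hostname template" pair and relies on zabbix-cli's link_template_to_host command, while my real script does a bit more parsing than that.

 #!/bin/bash
 # started from a systemd unit; restarts are handled with Restart=on-failure
 mosquitto_sub -h mqtt.internal.example.org -t "foreman/host/update" | \
   while read -r host template; do
     echo "[+] $(date +%Y%m%d-%H:%M) : linking template \"${template}\" to host \"${host}\""
     zabbix-cli -C "link_template_to_host ${template} ${host}"
   done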

Cool, so now I don't have to worry about forgetting to tie a zabbix template to a host, as it's now done automatically. Needless to say, the deployment of those tools was of course automated and comes from Puppet/Foreman :-)

Jan De Dobbeleer: Java two dot oh


I have to admit it, I’m not the biggest fan of Java. But when they asked me to prepare a talk for first-year students who are currently learning to code in Java, I decided it was time to challenge some of my prejudices. As I selected continuous integration as the topic of choice, I started out by looking at the available tools to quickly set up a reliable Java project. Having played with dotnet core over the past months, I was looking for a tool that could do a bit of the same: a straightforward CLI interface that can create a project out of the box to mess around with. Maven proved to be of little help, but gradle turned out to be exactly what I was looking for. Great, I gained some faith.

It’s only while creating my slides and looking for tooling that can be used specifically for Java that I had an epiphany. What if it were possible to create an entire developer environment using docker? No need for local dependencies like linting tools or gradle. No need to mess with an IDE to get everything set up. And no more “it works on my machine”. The power and advantages of a CI tool, straight onto your own computer.

A quick search on Google points us to gradle’s own Alpine linux container. It comes with JDK8 out of the box, exactly what we’re looking for. You can create a new Java application with a single command:

docker run -v=$(pwd):/app --workdir=/app gradle:alpine gradle init --type java-application

This starts a container, creates a volume linked to your current working directory and initializes a brand new Java application using gradle init --type java-application. As I don’t feel like typing those commands all the time, I created a makefile to help me build and debug the app. Yes, you can debug the app while it’s running in the container. Java supports remote debugging out of the box. Any modern IDE that supports Java, has support for remote debugging. Simply run the make debug command and attach to the remote debugging session on port 1044.

ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build

debug: build
    docker run --rm -v=${ROOT_DIR}:/app -p 1044:1044 --workdir=/app gradle:alpine java -classpath /app/build/classes/main -verbose -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1044 App

Now that we have a codebase that uses the same tools to build, run and debug, we need to bring our coding standard to a higher level. First off we need a linting tool. Traditionally, people look at checkstyle when it comes to Java. And while that could be fine for you, I found that tool rather annoying to set up. XML is not something I like to mess with other than to create UI, so seeing this verbose config set me back. There simply wasn’t time to look at that. Even with the 2 different style guides, it would still require a bit of tweaking to get everything right and make the build pass.

As it turns out, there are other tools out there which feel a bit more 21st century. One of those is coala. Now, coala can be used as a linting tool on a multitude of languages, not just Java, so definitely take a look at it, even if you’re not into Java yourself. It’s a Python based tool which has a lot of neat little bears who can do things. The config is a breeze, and they provide a container so you can run the checks in an isolated environment. All in all, exactly what we’re looking for.

Let’s extend our makefile to run coala:

docker run --rm -v=${ROOT_DIR}:/app --workdir=/app coala/base coala --ci -V

I made sure to enable verbose logging, simply to be able to illustrate the tool to students. Feel free to disable that. You can easily control what coala needs to verify by creating a .coafile in the root of the repository. One of the major advantages to use coala over anything else, is that it can do both simple linting checks as well as full on static code analysis.

Let’s have a look at the settings I used to illustrate its power.

[Default]
files = src/**/*.java
language = java

[SPACES]
bears = SpaceConsistencyBear
use_spaces = True

[TODOS]
bears = KeywordBear

[PMD]
bears = JavaPMDBear
check_optimizations = true
check_naming = false

You can start out by defining a default. In my case, I’m telling coala to look for .java files which are written in Java. There are three bears being used: SpaceConsistencyBear, who will check for spaces and not tabs; KeywordBear, who dislikes //TODO comments in code; and JavaPMDBear, who invokes PMD to do some static code analysis. In the example, I had to set check_naming = false, otherwise I would have lost a lot of time fixing those errors (mostly due to my own lack of Java knowledge).

Now, whenever I want to validate my code and enforce certain rules for me and my team, I can use coala to achieve this. Simply run make validate and it will start the container and invoke coala. At this point, we can set up the CI logic in our makefile by simply combining the two commands.

ci: validate build

The command make ci will invoke coala and, if all goes well, use gradle to build and test the project. As a cherry on top, I also included test coverage. Using Jacoco, you can easily set up rules to fail the build when the coverage drops below a certain threshold. The tool is integrated directly into gradle and provides everything you need out of the box; simply add the following lines to your build.gradle file. This way, the build will fail if the coverage drops below 50%.

apply plugin: 'jacoco'

jacocoTestReport {
    reports {
        xml.enabled true
        html.enabled true
    }
}

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.5
            }
        }
    }
}

check.dependsOn jacocoTestCoverageVerification

Make sure to edit the build step in the makefile to also include Jacoco.

build:
    docker run --rm -v=${ROOT_DIR}:/app --workdir=/app gradle:alpine gradle clean build jacocoTestReport

The only thing we still need to do is select a CI service of choice. I made sure to add examples for both circleci and travis, each of which only require docker and an override to use our makefile instead of auto-detecting gradle and running that. The way we set up this project allows us to easily switch CI when we need to, which is not all that strange given the lifecycle of a software project. The tools we choose when we start out, might be selected to fit the needs at the time of creation, but nothing assures us that will stay true forever. Designing for change is not something we need to do in code alone, it has a direct impact on everything, so expect things to change and your assumptions to be challenged.

Have a look at the source code for all the info and the build files for the two services. Enjoy!

Source code

Philip Van Hoof: The rules of scuba diving

  • First rule. You must understand the rules of scuba diving. If you don’t know or understand the rules of scuba diving, go to the second rule.
  • The second rule is that you never dive alone.
  • The third rule is that you always keep close enough to each other to perform a rescue of any kind.
  • The fourth rule is that you signal each other and therefore know each other’s signals. Underwater, communication is key.
  • The fifth rule is that you tell the others, for example, when you don’t feel well. The others want to know when you don’t feel well emotionally. Whenever you are insecure, you tell them. This is hard.
  • The sixth rule is that you don’t violate earlier agreed upon rules.
  • The seventh rule is that, since the given rules will be eclipsed the moment any form of panic occurs, you will restore the rules using rationalism first, pragmatism next, but emotional feelings last. No matter what.
  • The eighth rule is that the seventh rule is key to survival.

These rules make scuba diving an excellent learning school for software development project managers.


Xavier Mertens: [SANS ISC] My Little CVE Bot


I published the following diary on isc.sans.org: “My Little CVE Bot“.

The massive spread of the WannaCry ransomware last Friday was another good proof that many organisations still fail to patch their systems. Everybody admits that patching is a boring task. There are many constraints that make this process very difficult to implement and… apply! That’s why any help is welcome to know what to patch and when… [Read more]

[The post [SANS ISC] My Little CVE Bot has been first published on /dev/random]

Sven Vermeulen: Matching MD5 SSH fingerprint


Today I was attempting to update a local repository, when SSH complained about a changed fingerprint, something like the following:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:p4ZGs+YjsBAw26tn2a+HPkga1dPWWAWX+NEm4Cv4I9s.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/user/.ssh/known_hosts:9
ECDSA host key for 192.168.56.101 has changed and you have requested strict checking.
Host key verification failed.

I checked whether the host had been changed recently, whether the alias through which I connected had switched to a different host, or whether the SSH key had changed. But that wasn't the case. Or at least, not recently, and I distinctly remember connecting to the same host two weeks ago.

Now, what happened I don't know yet, but I do know I didn't want to connect until I reviewed the received SSH key fingerprint. I obtained the fingerprint from the administration (who graciously documented it on the wiki)...

... only to realize that the documented fingerprints are MD5 hashes (in hexadecimal) whereas the SSH command shows the key fingerprint in base64-encoded SHA256 by default.

Luckily, a quick search revealed this superuser post which told me to connect to the host using the FingerprintHash md5 option:

~$ ssh -o FingerprintHash=md5 192.168.56.11

The result is SSH displaying the MD5-hashed fingerprint, which I can now validate against the documented one. Once I validated that the key was the correct one, I accepted the change and continued with my endeavour.
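
If you can get hold of the host's public key itself (for example a copy from the server's /etc/ssh directory), ssh-keygen can also print its MD5 fingerprint directly; the path below is just an illustration:

~$ ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ecdsa_key.pub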

I later discovered (or, more precisely, have a strong suspicion) that I had an old elliptic curve key registered in my known_hosts file, which had not been used for the connection for quite some time. I recently re-enabled elliptic curve support in OpenSSH (with Gentoo's USE="-bindist"), which triggered the validation of the old key.

Dries Buytaert: Friduction: the internet's unstoppable drive to eliminate friction


There is one significant trend that I have noticed over and over again: the internet's continuous drive to mitigate friction in user experiences and business models.

Since the internet's commercial debut in the early 90s, it has captured success and upset the established order by eliminating unnecessary middlemen. Book stores, photo shops, travel agents, stock brokers, bank tellers and music stores are just a few examples of the kinds of middlemen who have been eliminated by their online counterparts. The act of buying books, printing photos or booking flights online alleviates the friction felt by consumers who must stand in line or wait on hold to speak to a customer service representative.

Rather than negatively describing this evolution as disintermediation or taking something away, I believe there is value in recognizing that the internet is constantly improving customer experiences by reducing friction from systems — a process I like to call "friduction".

Open Source and cloud

Over the past 15 years, I have observed Open Source and cloud-computing solutions remove friction from legacy approaches to technology. Open Source takes the friction out of the technology evaluation and adoption process; you are not forced to get a demo or go through a sales and procurement process, or deal with the limitations of a proprietary license. Cloud computing took off because it also offers friduction; with cloud, companies pay for what they use, avoid large up-front capital expenditures, and gain speed-to-market.

Cross-channel experiences

There is a reason why Drupal's API-first initiative is one of the topics I've talked and written the most about in 2016; it enables Drupal to "move beyond the page" and integrate with different user engagement systems that can eliminate inefficiencies and improve the user experience of traditional websites.

We're quickly headed to a world where websites are evolving into cross-channel experiences, which include push notifications, conversational UIs, and more. Conversational UIs, such as chatbots and voice assistants, will prevail because they improve and redefine the customer experience.

Personalization and contextualization

In the 90s, personalization meant that websites could address authenticated users by name. I remember the first time I saw my name appear on a website; I was excited! Obviously personalization strategies have come a long way since the 90s. Today, websites present recommendations based on a user's most recent activity, and consumers expect to be provided with highly tailored experiences. The drive for greater personalization and contextualization will never stop; there is too much value in removing friction from the user experience. When a commerce website can predict what you like based on past behavior, it eliminates friction from the shopping process. When a customer support website can predict what question you are going to ask next, it is able to provide a better customer experience. This is not only useful for the user, but also for the business. A more efficient user experience will translate into higher sales, improved customer retention and better brand exposure.

To keep pace with evolving user expectations, tomorrow's digital experiences will need to deliver more tailored, and even predictive customer experiences. This will require organizations to consume multiple sources of data, such as location data, historic clickstream data, or information from wearables to create a fine-grained user context. Data will be the foundation for predictive analytics and personalization services. Advancing user privacy in conjunction with data-driven strategies will be an important component of enhancing personalized experiences. Eventually, I believe that data-driven experiences will be the norm.

At Acquia, we started investing in contextualization and personalization in 2014, through the release of a product called Acquia Lift. Adoption of Acquia Lift has grown year over year, and we expect it to increase for years to come. Contextualization and personalization will become more pervasive, especially as different systems of engagements, big data, the internet of things (IoT) and machine learning mature, combine, and begin to have profound impacts on what the definition of a great user experience should be. It might take a few more years before trends like personalization and contextualization are fully adopted by the early majority, but we are patient investors and product builders. Systems like Acquia Lift will be of critical importance and premiums will be placed on orchestrating the optimal customer journey.

Conclusion

The history of the web dictates that lower-friction solutions will surpass what came before them because they eliminate inefficiencies from the customer experience. Friduction is a long-term trend. Websites, the internet of things, augmented and virtual reality, conversational UIs — all of these technologies will continue to grow because they will enable us to build lower-friction digital experiences.

Xavier Mertens: Your Password is Already In the Wild, You Did not Know?


There was a lot of buzz about the leak of two huge databases of passwords a few days ago. This has been reported by Troy Hunt on his blog. The two databases are called “Anti-Trust-Combo-List” and “Exploit.In”. While the sources of the leaks are not officially known, there are some ways to discover some of them (see my previous article about the “+” feature offered by Google).

A few days after the first leak, a second version of “Exploit.In” was released with even more passwords:

Name                    Size   Credentials
Anti-Trust-Combo-List   16GB   540.701.509
Exploit.In              15GB   499.305.318
Exploit.In (2)          24GB   805.499.579

With the huge amount of passwords released in the wild, you can assume that your password is also included. But what are those passwords? I used Robin Wood’s tool pipal to analyze them.

I decided to analyze the Anti-Trust-Combo-List but, due to a lack of resources, the run always failed (pipal requires a lot of memory to generate the statistics), even after several restarts. So I decided to use a sample of the passwords instead and successfully analyzed 91M of them. The results generated by pipal are available below.
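
For illustration, such a sampling run could look roughly like this; the file names are hypothetical and the exact pipal invocation may differ slightly between versions:

$ shuf -n 91000000 Anti-Trust-Combo-List.txt > sample.txt
$ ruby pipal.rb sample.txt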

What can we deduce? Weak passwords remain a classic. The most common password length is 8 characters and the vast majority of passwords are based on lowercase characters, often combined with digits. Interesting fact: users like to “increase” the complexity of the password by adding trailing numbers:

  • Just one number (due to the fact that they have to change it regularly and just increase it at every expiration)
  • By adding their birth year
  • By adding the current year
Basic Results

Total entries = 91178452
Total unique entries = 40958257

Top 20 passwords
123456 = 559283 (0.61%)
123456789 = 203554 (0.22%)
passer2009 = 186798 (0.2%)
abc123 = 100158 (0.11%)
password = 96731 (0.11%)
password1 = 84124 (0.09%)
12345678 = 80534 (0.09%)
12345 = 76051 (0.08%)
homelesspa = 74418 (0.08%)
1234567 = 68161 (0.07%)
111111 = 66460 (0.07%)
qwerty = 63957 (0.07%)
1234567890 = 58651 (0.06%)
123123 = 52272 (0.06%)
iloveyou = 51664 (0.06%)
000000 = 49783 (0.05%)
1234 = 35583 (0.04%)
123456a = 34675 (0.04%)
monkey = 32926 (0.04%)
dragon = 29902 (0.03%)

Top 20 base words
password = 273853 (0.3%)
passer = 208434 (0.23%)
qwerty = 163356 (0.18%)
love = 161514 (0.18%)
july = 148833 (0.16%)
march = 144519 (0.16%)
phone = 122229 (0.13%)
shark = 121618 (0.13%)
lunch = 119449 (0.13%)
pole = 119240 (0.13%)
table = 119215 (0.13%)
glass = 119164 (0.13%)
frame = 118830 (0.13%)
iloveyou = 118447 (0.13%)
angel = 101049 (0.11%)
alex = 98135 (0.11%)
monkey = 97850 (0.11%)
myspace = 90841 (0.1%)
michael = 88258 (0.1%)
mike = 82412 (0.09%)

Password length (length ordered)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
3 = 247263 (0.27%)
4 = 1046032 (1.15%)
5 = 1842546 (2.02%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
8 = 25586920 (28.06%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
12 = 1788770 (1.96%)
13 = 1014515 (1.11%)
14 = 709778 (0.78%)
15 = 846485 (0.93%)
16 = 475022 (0.52%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
19 = 83420 (0.09%)
20 = 93576 (0.1%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
30 = 12573 (0.01%)
31 = 4168 (0.0%)
32 = 66017 (0.07%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
40 = 435 (0.0%)
41 = 45 (0.0%)
42 = 57 (0.0%)
43 = 14 (0.0%)
44 = 47 (0.0%)
45 = 5 (0.0%)
46 = 13 (0.0%)
47 = 1 (0.0%)
48 = 16 (0.0%)
49 = 14 (0.0%)
50 = 21 (0.0%)
51 = 2 (0.0%)
52 = 1 (0.0%)
53 = 2 (0.0%)
54 = 22 (0.0%)
55 = 1 (0.0%)
56 = 3 (0.0%)
57 = 1 (0.0%)
58 = 2 (0.0%)
60 = 10 (0.0%)
61 = 3 (0.0%)
63 = 3 (0.0%)
64 = 1 (0.0%)
65 = 2 (0.0%)
66 = 9 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
69 = 1 (0.0%)
70 = 1 (0.0%)
71 = 3 (0.0%)
72 = 1 (0.0%)
73 = 1 (0.0%)
74 = 1 (0.0%)
76 = 2 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
79 = 3 (0.0%)
81 = 3 (0.0%)
83 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
90 = 6 (0.0%)
92 = 3 (0.0%)
93 = 1 (0.0%)
95 = 1 (0.0%)
96 = 16 (0.0%)
97 = 1 (0.0%)
98 = 3 (0.0%)
99 = 2 (0.0%)
100 = 1 (0.0%)
104 = 1 (0.0%)
107 = 1 (0.0%)
108 = 1 (0.0%)
109 = 1 (0.0%)
111 = 2 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
128 = 377 (0.0%)

Password length (count ordered)
8 = 25586920 (28.06%)
6 = 15660408 (17.18%)
7 = 14326554 (15.71%)
9 = 12250247 (13.44%)
10 = 11895989 (13.05%)
11 = 2604066 (2.86%)
5 = 1842546 (2.02%)
12 = 1788770 (1.96%)
4 = 1046032 (1.15%)
13 = 1014515 (1.11%)
15 = 846485 (0.93%)
14 = 709778 (0.78%)
16 = 475022 (0.52%)
3 = 247263 (0.27%)
17 = 157311 (0.17%)
18 = 136428 (0.15%)
20 = 93576 (0.1%)
19 = 83420 (0.09%)
32 = 66017 (0.07%)
1 = 54418 (0.06%)
2 = 49550 (0.05%)
21 = 46885 (0.05%)
22 = 42648 (0.05%)
23 = 31118 (0.03%)
24 = 29999 (0.03%)
25 = 25956 (0.03%)
26 = 14798 (0.02%)
30 = 12573 (0.01%)
27 = 10285 (0.01%)
28 = 10245 (0.01%)
29 = 7895 (0.01%)
31 = 4168 (0.0%)
33 = 1887 (0.0%)
34 = 1422 (0.0%)
35 = 1017 (0.0%)
36 = 469 (0.0%)
40 = 435 (0.0%)
128 = 377 (0.0%)
37 = 250 (0.0%)
38 = 231 (0.0%)
39 = 116 (0.0%)
42 = 57 (0.0%)
44 = 47 (0.0%)
41 = 45 (0.0%)
54 = 22 (0.0%)
50 = 21 (0.0%)
48 = 16 (0.0%)
96 = 16 (0.0%)
49 = 14 (0.0%)
43 = 14 (0.0%)
46 = 13 (0.0%)
60 = 10 (0.0%)
66 = 9 (0.0%)
90 = 6 (0.0%)
45 = 5 (0.0%)
71 = 3 (0.0%)
56 = 3 (0.0%)
92 = 3 (0.0%)
79 = 3 (0.0%)
98 = 3 (0.0%)
63 = 3 (0.0%)
61 = 3 (0.0%)
81 = 3 (0.0%)
51 = 2 (0.0%)
58 = 2 (0.0%)
65 = 2 (0.0%)
53 = 2 (0.0%)
67 = 2 (0.0%)
68 = 2 (0.0%)
76 = 2 (0.0%)
111 = 2 (0.0%)
99 = 2 (0.0%)
73 = 1 (0.0%)
72 = 1 (0.0%)
74 = 1 (0.0%)
70 = 1 (0.0%)
69 = 1 (0.0%)
77 = 1 (0.0%)
78 = 1 (0.0%)
64 = 1 (0.0%)
109 = 1 (0.0%)
114 = 1 (0.0%)
119 = 1 (0.0%)
83 = 1 (0.0%)
107 = 1 (0.0%)
85 = 1 (0.0%)
86 = 1 (0.0%)
104 = 1 (0.0%)
88 = 1 (0.0%)
89 = 1 (0.0%)
57 = 1 (0.0%)
100 = 1 (0.0%)
55 = 1 (0.0%)
93 = 1 (0.0%)
52 = 1 (0.0%)
95 = 1 (0.0%)
47 = 1 (0.0%)
97 = 1 (0.0%)
108 = 1 (0.0%)

[pipal's ASCII histogram of the password length distribution did not survive formatting; the counts above carry the same information]

One to six characters = 18900217 (20.73%)
One to eight characters = 58813691 (64.5%)
More than eight characters = 32364762 (35.5%)

Only lowercase alpha = 25300978 (27.75%)
Only uppercase alpha = 468686 (0.51%)
Only alpha = 25769664 (28.26%)
Only numeric = 9526597 (10.45%)

First capital last symbol = 72550 (0.08%)
First capital last number = 2427417 (2.66%)

Single digit on the end = 13167140 (14.44%)
Two digits on the end = 14225600 (15.6%)
Three digits on the end = 6155272 (6.75%)

Last number
0 = 4370023 (4.79%)
1 = 12711477 (13.94%)
2 = 5661520 (6.21%)
3 = 6642438 (7.29%)
4 = 3951994 (4.33%)
5 = 4028739 (4.42%)
6 = 4295485 (4.71%)
7 = 4055751 (4.45%)
8 = 3596305 (3.94%)
9 = 4240044 (4.65%)

[pipal's ASCII histogram of the last-number distribution (digits 0 to 9) omitted; see the counts above and below]

Last digit
1 = 12711477 (13.94%)
3 = 6642438 (7.29%)
2 = 5661520 (6.21%)
0 = 4370023 (4.79%)
6 = 4295485 (4.71%)
9 = 4240044 (4.65%)
7 = 4055751 (4.45%)
5 = 4028739 (4.42%)
4 = 3951994 (4.33%)
8 = 3596305 (3.94%)

Last 2 digits (Top 20)
23 = 2831841 (3.11%)
12 = 1570044 (1.72%)
11 = 1325293 (1.45%)
01 = 1036629 (1.14%)
56 = 1013453 (1.11%)
10 = 909480 (1.0%)
00 = 897526 (0.98%)
13 = 854165 (0.94%)
09 = 814370 (0.89%)
21 = 812093 (0.89%)
22 = 709996 (0.78%)
89 = 706074 (0.77%)
07 = 675624 (0.74%)
34 = 627901 (0.69%)
08 = 626722 (0.69%)
69 = 572897 (0.63%)
88 = 557667 (0.61%)
77 = 557429 (0.61%)
14 = 539236 (0.59%)
45 = 530671 (0.58%)

Last 3 digits (Top 20)
123 = 2221895 (2.44%)
456 = 807267 (0.89%)
234 = 434714 (0.48%)
009 = 326602 (0.36%)
789 = 318622 (0.35%)
000 = 316149 (0.35%)
345 = 295463 (0.32%)
111 = 263894 (0.29%)
101 = 225151 (0.25%)
007 = 222062 (0.24%)
321 = 221598 (0.24%)
666 = 201995 (0.22%)
010 = 192798 (0.21%)
777 = 164454 (0.18%)
011 = 141015 (0.15%)
001 = 138363 (0.15%)
008 = 137610 (0.15%)
999 = 129483 (0.14%)
987 = 126046 (0.14%)
678 = 123301 (0.14%)

Last 4 digits (Top 20)
3456 = 727407 (0.8%)
1234 = 398622 (0.44%)
2009 = 298108 (0.33%)
2345 = 269935 (0.3%)
6789 = 258059 (0.28%)
1111 = 148964 (0.16%)
2010 = 140684 (0.15%)
2008 = 111014 (0.12%)
2000 = 110456 (0.12%)
0000 = 108767 (0.12%)
2011 = 103328 (0.11%)
5678 = 102873 (0.11%)
4567 = 94964 (0.1%)
2007 = 94172 (0.1%)
4321 = 92849 (0.1%)
3123 = 92104 (0.1%)
1990 = 87828 (0.1%)
1987 = 87142 (0.1%)
2006 = 86640 (0.1%)
1991 = 86574 (0.09%)

Last 5 digits (Top 20)
23456 = 721648 (0.79%)
12345 = 261734 (0.29%)
56789 = 252914 (0.28%)
11111 = 116179 (0.13%)
45678 = 96011 (0.11%)
34567 = 90262 (0.1%)
23123 = 84654 (0.09%)
00000 = 81056 (0.09%)
54321 = 73623 (0.08%)
67890 = 66301 (0.07%)
21212 = 28777 (0.03%)
23321 = 28767 (0.03%)
77777 = 28572 (0.03%)
22222 = 27754 (0.03%)
55555 = 26081 (0.03%)
66666 = 25872 (0.03%)
56123 = 21354 (0.02%)
88888 = 19025 (0.02%)
99999 = 18288 (0.02%)
12233 = 16677 (0.02%)

Character sets
loweralphanum: 47681569 (52.29%)
loweralpha: 25300978 (27.75%)
numeric: 9526597 (10.45%)
mixedalphanum: 3075964 (3.37%)
loweralphaspecial: 1721507 (1.89%)
loweralphaspecialnum: 1167596 (1.28%)
mixedalpha: 981987 (1.08%)
upperalphanum: 652292 (0.72%)
upperalpha: 468686 (0.51%)
mixedalphaspecialnum: 187283 (0.21%)
specialnum: 81096 (0.09%)
mixedalphaspecial: 53882 (0.06%)
upperalphaspecialnum: 39668 (0.04%)
upperalphaspecial: 18674 (0.02%)
special: 14657 (0.02%)

Character set ordering
stringdigit: 41059315 (45.03%)
allstring: 26751651 (29.34%)
alldigit: 9526597 (10.45%)
othermask: 4189226 (4.59%)
digitstring: 4075593 (4.47%)
stringdigitstring: 2802490 (3.07%)
stringspecial: 792852 (0.87%)
digitstringdigit: 716311 (0.79%)
stringspecialstring: 701378 (0.77%)
stringspecialdigit: 474579 (0.52%)
specialstring: 45323 (0.05%)
specialstringspecial: 28480 (0.03%)
allspecial: 14657 (0.02%)

[The post Your Password is Already In the Wild, You Did not Know? has been first published on /dev/random]

Mattias Geniar: CentOS 7.4 to ship with TLS 1.2 + ALPN


Oh happy days!

I've long been tracking the "Bug 1276310 -- (rhel7-openssl1.0.2) RFE: Need OpenSSL 1.0.2" issue, where Red Hat users are asking for an updated version of the OpenSSL package. Mainly to get TLS 1.2 and ALPN.

_openssl_ rebased to version 1.0.2k

The _openssl_ package has been updated to upstream version 1.0.2k, which provides a number of enhancements, new features, and bug fixes, including:

* Added support for the datagram TLS (DTLS) protocol version 1.2.

* Added support for the TLS automatic elliptic curve selection.

* Added support for the Application-Layer Protocol Negotiation (ALPN).

* Added Cryptographic Message Syntax (CMS) support for the following schemes: RSA-PSS, RSA-OAEP, ECDH, and X9.42 DH.

Note that this version is compatible with the API and ABI in the *OpenSSL* library version in previous releases of Red Hat Enterprise Linux 7.
RFE: Need OpenSSL 1.0.2

The ALPN support is needed because the Chrome browser requires server-side ALPN support in order to negotiate HTTP/2. Without it, Chrome users don't get to use HTTP/2 on your servers.
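
Once the updated openssl package is deployed, a quick way to check whether a server negotiates HTTP/2 via ALPN is an openssl s_client probe like the one below; the hostname is just a placeholder:

$ openssl s_client -connect www.example.com:443 -alpn h2 < /dev/null 2>/dev/null | grep ALPN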

The newly updated packages for OpenSSL are targeting the RHEL 7.4 release, which -- as far as I'm aware -- has no scheduled release date yet. But I'll be waiting for it!

As soon as RHEL 7.4 is released, we should expect a CentOS 7.4 release soon after.

The post CentOS 7.4 to ship with TLS 1.2 + ALPN appeared first on ma.ttias.be.
