The past weeks have been difficult. I'm well aware that the community is struggling, and it really pains me. I respect the various opinions expressed, including opinions different from my own. I want you to know that I'm listening and that I'm carefully considering the different aspects of this situation. I'm doing my best to progress through the issues and support the work that needs to happen to evolve our governance model. For those who are attending DrupalCon Baltimore and want to help, we just added a community discussions track.
There is a lot to figure out, and I know that it's difficult when there are unresolved questions. Leading up to DrupalCon Baltimore next week, it may be helpful for people to know that Larry Garfield and I are talking. As members of the Community Working Group reported this week, Larry remains a member of the community. While we figure out Larry's future roles, Larry is attending DrupalCon as a regular community member with the opportunity to participate in sessions, code sprints and issue queues.
As we are about to kick off DrupalCon Baltimore, please know that my wish for this conference is for it to be everything you've made it over the years; a time for bringing out the best in each other, for learning and sharing our knowledge, and for great minds to work together to move the project forward. We owe it to the 3,000 people who will be in attendance to make DrupalCon about Drupal. To that end, I ask for your patience towards me, so I can do my part in helping to achieve these goals. It can only happen with your help, support, patience and understanding. Please join me in making DrupalCon Baltimore an amazing time to connect, collaborate and learn, like the many DrupalCons before it.
(I have received a lot of comments and at this time I just want to respond with an update. I decided to close the comments on this post.)
I published the following diary on isc.sans.org: “Analysis of a Maldoc with Multiple Layers of Obfuscation“.
Thanks to our readers, we often get interesting samples to analyze. This time, Frederick sent us a malicious Microsoft Word document called “Invoice_6083.doc” (which was delivered in a zip archive). I had a quick look at it and it was interesting enough for a quick diary… [Read more]
When I joined in November 2007, the news feed chronologically listed status updates from your friends. Great!
Since then, they’ve done every imaginable thing to increase time spent, also known as the euphemistic “engagement”. They’ve done this by surfacing friends’ likes, suggested likes, friends’ replies, suggested friends, suggested pages to like based on prior likes, and of course: ads. Those things are not only distractions, they’re actively annoying.
Instead of minimizing time spent so users can get back to their lives, Facebook has sacrificed users at the altar of advertising: more time spent = more ads shown.
No longer trustworthy
An entire spectrum of concerns to choose from, along two axes: privacy and walled garden. And of course, the interesting intersection of minimized privacy and maximized walled gardenness: the filter bubble.
It’s okay to not know everything that’s been going on. It makes for more interesting conversations when you do get to see each other again.
The older I get, the more I prefer the one communication medium that is not a walled garden, that I can control, back up and search: e-mail.
To be clear: I’m still very grateful for that opportunity. It was a great working environment and it helped my career! ↩︎
Before deleting my Facebook account, I scrolled through my entire news feed. This time it was seemingly endless; I stopped after the first 202 items, by which point I’d had enough. Of those 202 items, 58 (28%) were by former or current Facebook employees. Another 81 (40%) reported on a like or reply by somebody — which I could not care less about 99% of the time. And the remainder, well… the vast majority of it is mildly interesting at best. Knowing all the trivia in everybody’s lives is fatiguing and wasteful, not fascinating and useful. ↩︎
They’ve changed it since then, to: give people the power to share and make the world more open and connected.↩︎
Wim made a stir in the land of the web. Good for Wim that he rid himself of the shackles of social media.
But how will we bring a generation of people, who are now more or less addicted to social media, to a new platform? And what should that platform look like?
I’m not an anthropologist, but I believe it is human nature, when organizing around new concepts and techniques, for us humans to start central and monolithic. Then we fine-tune it. We figure out that the central organization and monolithic implementation become a limiting factor. Then we decentralize it.
The next step for all those existing and potential so-called ‘online services’ is to become fully decentralized.
Every family or home should have its own IMAP and SMTP server. Should that be JMAP instead? Probably. But that ain’t the point. The fact that every family or home will have its own, is. For chat, XMPP’s s2s is like SMTP. Postfix is an implementation of SMTP like ejabberd is for XMPP’s s2s. We have Cyrus, Dovecot and others for IMAP, which is the c2s of course. And soon we’ll probably have JMAP, too. Addressability? IPv6.
Why not something like this for social media? For the next online appliance, too? Augmented reality worlds can be negotiated in a distributed fashion. Why must Second Life necessarily be centralized? Surely we can run Linden Lab’s server software, locally.
Simple, because money is not interested in anything non-centralized. Not yet.
In other news, the Internet stopped working truly well ever since money became its driving factor.
PS: “The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.” (Friedrich Nietzsche)
Mastonaut’s log, tootdate 10. We started out by travelling on board mastodon.social. Being the largest instance, we met people from all over the fediverse. Some we could understand, others we couldn’t. Those were interesting days; I encountered a lot of people fleeing from other places to feel free and be themselves, while others were simply enjoying the ride. It wasn’t until we encountered the Pawoo, who turned out to have peculiar tastes when it comes to imagery, that the order in the fediverse got disturbed. But, as we can’t expect to get freedom while restricting others’, I fetched the plans to build my own instance. Ready to explore the fediverse and its inhabitants on my own, I set out on an exciting journey.
As I do not own a server myself, and still had $55 credit on Digital Ocean, I decided to set up a simple $5 Ubuntu 16.04 droplet to get started. This setup assumes you’ve got a domain name, and I will even show you how to run Mastodon on a subdomain while identifying on the root domain. I suggest following the initial server setup to make sure you get started the right way. Once you’re all set, grab a refreshment and connect to your server through SSH.
Let’s start by ensuring we have everything we need to proceed. There are a few dependencies to run Mastodon. We need docker to run the different applications and tools in containers (easiest approach) and nginx to expose the apps to the outside world. Luckily, Digital Ocean has an insane amount of up-to-date documentation we can use. Follow these two guides and report back.
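With Docker and nginx in place, grab the Mastodon source code. A minimal sketch, assuming the upstream repository (tootsuite/mastodon) is still where the project lives:
git clone https://github.com/tootsuite/mastodon.git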
Change into that directory and check out the latest release (1.2.2 at the time of writing).
cd mastodon
git checkout 1.2.2
Now that we’ve got all of this set up, we can build our containers. There’s a useful guide made by the Mastodon community I suggest you follow. Before we make this available to the outside world, we want to tweak our .env.production file to configure the instance. There are a few keys in there we need to adjust, and some we could adjust. In my case, Mastodon runs as a single-user instance, meaning only one user is allowed in: nobody can register, and the home page redirects to that user’s profile instead of the login page. Below are the settings I adjusted; remember that I run Mastodon on the subdomain mastodon.herebedragons.io, but my user identifies as @jan@herebedragons.io. The config changes below illustrate that behaviour. If you have no use for that, just leave the WEB_DOMAIN key commented out. If you do need it, however, you’ll still have to add a redirect rule for your root domain that points https://rootdomain/.well-known/host-meta to https://subdomain.rootdomain/.well-known/host-meta. I added a rule on Cloudflare to achieve this, but any approach will do.
# Federation
LOCAL_DOMAIN=herebedragons.io
LOCAL_HTTPS=true
# Use this only if you need to run mastodon on a different domain than the one used for federation.
# Do not use this unless you know exactly what you are doing.
WEB_DOMAIN=mastodon.herebedragons.io
# Registrations
# Single user mode will disable registrations and redirect frontpage to the first profile
SINGLE_USER_MODE=true
As we can’t run a site without configuring SSL, we’ll use Let’s Encrypt to secure nginx. Follow the brilliant guide over at Digital Ocean and report back for the last part. Once set up, we need to configure nginx (and the DNS settings for your domain) to make Mastodon available for the world to enjoy. You can find my settings here; just make sure to adjust the key file’s name and the DNS settings. As I redirect all HTTP traffic to HTTPS using Cloudflare, I did not bother to add port 80 to the config; be sure to add it if needed.
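For reference, here is a stripped-down sketch of what such a server block can look like. The ports assume the default docker-compose.yml (web on 3000, streaming on 4000) and the certificate paths are the usual Let’s Encrypt defaults, so treat it as illustrative rather than a copy of my exact settings:
server {
    listen 443 ssl;
    server_name mastodon.herebedragons.io;

    # Let's Encrypt certificates (certbot default paths)
    ssl_certificate     /etc/letsencrypt/live/mastodon.herebedragons.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mastodon.herebedragons.io/privkey.pem;

    # Mastodon web UI and API, served by the web container
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:3000;
    }

    # WebSocket streaming API, served by the streaming container
    location /api/v1/streaming {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:4000;
    }
}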
Alright, we’re ready to start exploring the fediverse! Make sure to restart nginx to apply the latest settings using sudo service nginx restart and update the containers to reflect your settings via docker-compose up -d. If all went according to plan, you should see your brand new shiny instance on your domain name. Create your first user and get ready to toot! In case you did not bother to add an SMTP server, manually confirm your user:
docker-compose run --rm web rails mastodon:confirm_email USER_EMAIL=alice@alice.com
And make sure to give yourself ultimate admin powers to be able to configure your instance:
docker-compose run --rm web rails mastodon:make_admin USERNAME=alice
Updating is a straightforward process too. Fetch the latest changes from the remote, check out the tag you want, and update your containers:
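For example, if you were moving to the 1.2.2 release used earlier (substitute whatever tag you are after):
git fetch --tags
git checkout 1.2.2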
docker-compose stop
docker-compose build
docker-compose run --rm web rails db:migrate
docker-compose run --rm web rails assets:precompile
docker-compose up -d
Git doesn't deal very well with very large files, but that's fine; when using things like git-annex or git-lfs, it's possible to handle very large files anyway.
But what if you've added a file to git-lfs which didn't need to be
there? Let's say you installed git-lfs and told it to track all *.zip
files, but then it turned out that some of those files were really
small, and that the extra overhead of tracking them in git-lfs is
causing a lot of grief with your users. What do you do now?
With git-annex, the solution is simple: you just run git annex
unannex <filename>, and you're done. You may also need to tell the
assistant to no longer automatically add that file to the annex, but
beyond that, all is well.
With git-lfs, this works slightly differently. It's not much more
complicated, but it's not documented in the man page. The naive way to
do that would be to just run git lfs untrack; but when I tried that it
didn't work. Instead, I found that the following does work:
First, edit your .gitattributes file, so that the file which was
(erroneously) added to git-lfs is no longer matched against a
git-lfs rule in your .gitattributes.
Next, run git rm --cached <file>. This will tell git to mark the
file as removed from git, but crucially, will retain the file in the
working directory. Dropping the --cached will cause you to lose
those files, so don't do that.
Finally, run git add <file>. This will cause git to add the file to
the index again; but as it's now no longer being smudged up by
git-lfs, it will be committed as the normal file that it's supposed
to be.
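Put together, the whole dance looks something like this; the *.zip pattern and the file name are only an illustration based on the example above:
# 1. stop matching the file against a git-lfs rule in .gitattributes
#    (here: drop the "*.zip filter=lfs ..." line that git-lfs added)
sed -i '/^\*\.zip filter=lfs/d' .gitattributes
# 2. mark the file as removed from the index, but keep it in the working directory
git rm --cached big-archive.zip
# 3. add it again; without the lfs filter it is committed as a normal blob
git add .gitattributes big-archive.zip
git commit -m "Stop tracking big-archive.zip with git-lfs"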
I’m the kind of dev who dreads configuring webservers and who would rather not have to put up with random ops stuff before being able to get work done. Docker is one of those things I’ve never looked into, cause clearly it’s evil annoying boring evil confusing evil ops stuff. Two of my colleagues just introduced me to a one-line Docker command that kind of blew my mind.
Want to run tests for a project but don’t have PHP7 installed? Want to execute a custom Composer script that runs both these tests and the linters without having Composer installed? Don’t want to execute code you are not that familiar with on your machine that contains your private keys, etc? Assuming you have Docker installed, this command is all you need:
docker run --rm --interactive --tty --volume $PWD:/app -w /app \
--volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer ci
This command uses the Composer Docker image, as indicated by the first of the two composer words at the end of the command. After that you can specify whatever you want to execute, in this case composer ci, where ci is a custom Composer script. (If you want to know what the Docker image is doing behind the scenes, check its entry point file.)
This works without having PHP or Composer installed, and is very fast after the initial dependencies have been pulled. And each time you execute the command, the environment is destroyed, avoiding state leakage. You can create a composer alias in your .bash_aliases as follows, and then execute composer on your host just as you would do if it was actually installed (and running) there.
alias composer='docker run --rm --interactive --tty --volume $PWD:/app -w /app \
--volume ~/.composer:/composer --user $(id -u):$(id -g) composer composer'
Of course you are not limited to running Composer commands, you can also invoke PHPUnit
...(id -g) composer vendor/bin/phpunit
or indeed any PHP code.
...(id -g) composer php -r 'echo "hi";'
This one-liner is not sufficient if you require additional dependencies, such as PHP extensions, databases or webservers. In those cases you probably want to create your own Dockerfile. Though to run the tests of most PHP libraries you should be good. I’ve now uninstalled my local Composer and PHP.
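For what it's worth, such a Dockerfile can stay tiny; a sketch that simply extends the Composer image with one extra PHP extension (pdo_mysql is just an example, not something this post needs):
# Extend the official Composer image with an extra PHP extension
FROM composer
RUN docker-php-ext-install pdo_mysql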
More and more developers are choosing content-as-a-service solutions known as headless CMSes — content repositories which offer no-frills editorial interfaces and expose content APIs for consumption by an expanding array of applications. Headless CMSes share a few common traits: they lack end-user front ends, provide few to no editorial tools for display and layout, and as such leave presentational concerns almost entirely up to the front-end developer. Headless CMSes have gained popularity because:
Developers want to separate the concerns of structure and presentation so that front-end teams and back-end teams can work independently from each other.
Editors and marketers are looking for solutions that can serve content to a growing list of channels, including websites, back-end systems, single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.
Due to this trend among developers, many are rightfully asking whether headless CMSes are challenging the market for traditional CMSes. I'm not convinced that headless CMSes as they stand today are where the CMS world in general is headed. In fact, I believe a nuanced view is needed.
In this blog post, I'll explain why Drupal has one crucial advantage that propels it beyond the emerging headless competitors: it can be an exceptional CMS for editors who need control over the presentation of their content and a rich headless CMS for developers building out large content ecosystems in a single package.
As Drupal continues to power the websites that have long been its bread and butter, it is also used more and more to serve content to other back-end systems, single-page applications, native applications, and even conversational interfaces — all at the same time.
Headless CMSes are leaving editors behind
This diagram illustrates the differences between a traditional Drupal website and a headless CMS with various front ends receiving content.
Some claim that headless CMSes will replace traditional CMSes like Drupal and WordPress when it comes to content editors and marketers. I'm not so sure.
Where headless CMSes fall flat is in the areas of in-context administration and in-place editing of content. Our outside-in efforts, in contrast, aim to allow an editor to administer content and page structure in an interface alongside a live preview rather than in an interface that is completely separate from the end user experience. Some examples of this paradigm include dragging blocks directly into regions or reordering menu items and then seeing both of these changes apply live.
By their nature, headless CMSes lack full-fledged editorial experience integrated into the front ends to which they serve content. Unless they expose a content editing interface tied to each front end, in-context administration and in-place editing are impossible. In other words, to provide an editorial experience on the front end, that front end must be aware of that content editing interface — hence the necessity of coupling.
Display and layout manipulation is another area that is key to making marketers successful. One of Drupal's key features is the ability to control where content appears in a layout structure. Headless CMSes are unopinionated about display and layout settings. But just like in-place editing and in-context administration, editorial tools that enable this need to be integrated into the front end that faces the end user in order to be useful.
In addition, editors and marketers are particularly concerned about how content will look once it's published. Access to an easy end-to-end preview system, especially for unpublished content, is essential to many editors' workflows. In the headless CMS paradigm, developers have to jump through fairly significant hoops to enable seamless preview, including setting up a new API endpoint or staging environment and deploying a separate version of their application that issues requests against new paths. As a result, I believe seamless preview — without having to tap on a developer's shoulder — is still necessary.
Features like in-place editing, in-context administration, layout manipulation, and seamless but faithful preview are essential building blocks for an optimal editorial experience for content creators and marketers. For some use cases, these drawbacks are totally manageable, especially where an application needs little editorial interaction and is more developer-focused. But for content editors, headless CMSes simply don't offer the toolkits they have come to expect; they fall short where Drupal shines.
Drupal empowers both editors and application developers
This diagram illustrates the differences between a coupled — but headless-enabled — Drupal website and a headless CMS with various front ends receiving content.
All of this isn't to say that headless isn't important. Headless is important, but supporting both headless and traditional approaches is one of the biggest advantages of Drupal. After all, content management systems need to serve content beyond editor-focused websites to single-page applications, native applications, and even emerging devices such as wearables, conversational interfaces, and IoT devices.
Fortunately, the ongoing API-first initiative is actively working to advance existing and new web services efforts that make using Drupal as a content service much easier and more optimal for developers. We're working on making developers of these applications more productive, whether through web services that provide a great developer experience like JSON API and GraphQL or through tooling that accelerates headless application development like the Waterwheel ecosystem.
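As a rough sketch of what that developer experience looks like, assuming a Drupal 8 site with the JSON API module enabled and an article content type (both assumptions for illustration, not something discussed above), content is one HTTP request away:
# Fetch a collection of article nodes as a JSON API document
curl -H "Accept: application/vnd.api+json" https://example.com/jsonapi/node/article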
For me, the key takeaway of this discussion is: Drupal is great for both editors and developers. But there are some caveats. For web experiences that need significant focus on the editor or assembler experience, you should use a coupled Drupal front end which gives you the ability to edit and manipulate the front end without involving a developer. For web experiences where you don't need editors to be involved, Drupal is still ideal. In an API-first approach, Drupal provides for other digital experiences that it can't explicitly support (those that aren't web-based). This keeps both options open to you.
Drupal for your site, headless Drupal for your apps
This diagram illustrates the ideal architecture for Drupal, which should be leveraged as both a front end in and of itself as well as a content service for other front ends.
In this day and age, having all channels served by a single source of truth for content is important. But what architecture is optimal for this approach? While reading this you might have also experienced some déjà-vu from a blog post I wrote last year about how you should decouple Drupal, which is still solid advice nearly a year after I first posted it.
Ultimately, I recommend an architecture where Drupal is simultaneously coupled and decoupled; in short, Drupal shines when it's positioned both for editors and for application developers, because Drupal is great at both roles. In other words, your content repository should also be your public-facing website — a contiguous site with full editorial capabilities. At the same time, it should be the centerpiece for your collection of applications, which don't necessitate editorial tools but do offer your developers the experience they want. Keeping Drupal as a coupled website, while concurrently adding decoupled applications, isn't a limitation; it's an enhancement.
Conclusion
Today's goal isn't to make Drupal API-only, but rather API-first. It doesn't limit you to a coupled approach like CMSes without APIs, and it doesn't limit you to an API-only approach like Contentful and other headless CMSes. To me, that is the most important conclusion to draw from this: Drupal supports an entire spectrum of possibilities. This allows you to make the proper trade-off between optimizing for your editors and marketers, or for your developers, and to shift elsewhere on that spectrum as your needs change.
It's a spectrum that encompasses both extremes of the scenarios that a coupled approach and headless approach represent. You can use Drupal to power a single website as we have for many years. At the same time, you can use Drupal to power a long list of applications beyond a traditional website. In doing so, Drupal can be adjusted up and down along this spectrum according to the requirements of your developers and editors.
In other words, Drupal is API-first, not API-only, and rather than leave editors and marketers behind in favor of developers, it gives everyone what they need in one single package.
The topic of this session: developing with the MEAN stack
Theme: development
Audience: developers | students | startup founders
Speakers: Fabian Vilers (Dev One) and Sylvain Guérin (Guess Engineering)
Venue: Université de Mons, Campus Plaine de Nimy, avenue Maistriau, Grands Amphithéâtres, Auditoire Curie (see this map on the UMONS website, or the OSM map).
Attendance is free and only requires registering by name, preferably in advance, or at the door. Please let us know you are coming by signing up via http://jeudisdulibre.fikket.com/. The session will be followed by a friendly drink.
If you are interested in this monthly series, feel free to check the agenda and subscribe to the mailing list to automatically receive the announcements.
As a reminder, the Jeudis du Libre are meant to be spaces for exchange around Free Software topics. The Mons sessions take place every third Thursday of the month and are organized in, and in collaboration with, the Mons universities and colleges involved in IT education (UMONS, HEH and Condorcet), with the support of the non-profit LoLiGrUB, which is active in promoting free software.
Description: MEAN is a complete JavaScript stack, based on open-source software, for quickly developing web-oriented applications. It is built on four fundamental pillars:
MongoDB (a document-oriented NoSQL database)
Express (a back-end application development framework)
Angular (a front-end application development framework)
Node.js (a cross-platform runtime environment based on Google's V8 JavaScript engine).
During this session, we will take turns presenting each building block of a MEAN application while building, step by step, live, and together with you, a simple working application.
Short bios:
A developer for nearly 17 years, Fabian describes himself as a software craftsman. He cares a lot about precision, quality and attention to detail when designing and fine-tuning software. Since 2010, he has been a consultant with Dev One, for which he strengthens, guides and trains development teams while promoting good software development practices in Belgium and France. You can follow Fabian on Twitter and LinkedIn.
For more than 20 years, Sylvain has been supporting companies in delivering their IT projects. An active witness of the digital revolution and an evangelist of the agile message, he will try to make you see things from a different angle. You can follow Sylvain on Twitter and LinkedIn.
Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This is my first participation in a FIRST event. FIRST is an organization that helps with incident response, as stated on their website:
FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.
The event was organized at the Cisco office. Monday was dedicated to a training about incident response and the next two days to presentations, all of them focusing on the defence side (“blue team”). Here are a few notes about interesting stuff that I learned.
The first day started with two guys from Facebook: Eric Water & Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: “PCAPs don’t scale”. In fact, with their solution, it scales! To investigate incidents, PCAPs are often the gold mine. They contain many IOCs, but they also introduce challenges: the disk space, the retention policy, the growing network throughput. When vendors’ solutions don’t fit, it’s time to build your own solution. Ok, only big organizations like Facebook have the resources to do this, but it’s quite fun. The solution they developed can be seen as a service: “PCAP as a Service”. They started by building the right hardware for sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached top performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!
The next presentation focused on “internal network monitoring and anomaly detection through host clustering” by Thomas Atterna from TNO. The idea behind this talk was to explain how to also monitor internal traffic. Indeed, in many cases, organizations still focus on the perimeter, but internal traffic is also important: we can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that have the same behaviour, like mail servers, database servers, … The idea is then to determine “normal” behaviour per cluster and to observe when individual hosts deviate from it. Clusters are based on behaviour (the amount of traffic, the number of flows, protocols, …). The model is useful when your network is quite closed and stable, but much more difficult to implement in an “open” environment (like university networks).
Then Davide Carnali gave a nice review of the Nigerian cybercrime landscape. He explained in detail how they prepare their attacks, how they steal credentials, and how they deploy their attacking platform (RDP, RAT, VPN, etc). The second part was a step-by-step explanation of how they abuse companies to steal money (sometimes a lot of it!). An interesting fact reported by Davide: the time required between the compromise of a new host (to drop a malicious payload) and the generation of new maldocs pointing to this host is only… 3 hours!
The next presentation was given by Gal Bitensky (Minerva): “Vaccination: An Anti-Honeypot Approach”. Gal (re-)explained what the purpose of a honeypot is and how they can be defeated. Then, he presented a nice review of the ways used by attackers to detect sandboxes. Basically, when a malware detects something “suspicious” (read: which makes it think that it is running in a sandbox), it will just silently exit. Gal had the idea to create a script which creates plenty of artefacts on a Windows system to defeat malware. His tool has been released here.
Paul Alderson (FireEye) presented “Injection without needles: A detailed look at the data being injected into our web browsers”. Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution. This is a service they provide to their members. It is based on DNS RPZ. The idea was to provide the following features:
Prevention
Detection
Awareness
Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have a collateral issue like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. Note that RPZ support is implemented in many solutions, especially Bind 9.
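As a hedged illustration of what this looks like in Bind 9 (the zone name, file path and example records below are made up), an RPZ setup boils down to a response-policy statement plus a policy zone:
// named.conf: point Bind at a local response policy zone
options {
    response-policy { zone "rpz.local"; };
};
zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};
// db.rpz.local then maps blocked names to your landing page, e.g.
//   bad-domain.example.    CNAME  landing.your-org.example.
//   *.bad-domain.example.  CNAME  landing.your-org.example.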
Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: “Let your CSIRT do malware analysis”. It was a complete review of the platform that they deployed to perform more efficient automatic malware analysis. The project is based on Cuckoo that was heavily modified to match their new requirements.
The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:
If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries
Then, the first talk was really interesting: Chris Hall presented “Intelligence Collection Techniques“. After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help automate these tasks. His tools address:
Using the Google API, VT API
Paste websites (like pastebin.com)
YARA rules
DNS typosquatting
Whois queries
All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.
The next talk was presented by a Cisco guy, Sunil Amin: “Security Analytics with Network Flows”. NetFlow isn’t a new technology. Initially developed by Cisco, there are today a lot of versions and forks of it. Starting from the definition of a “flow” (“a layer 3 IP communication between two endpoints during some time period”), we got a review of NetFlow. NetFlow is valuable to increase the visibility of what’s happening on your networks, but it also has some specific points that must be addressed before performing analysis, e.g. de-duplicating flows. There are many use cases where network flows are useful:
Discover RFC1918 address space
Discover internal services
Look for blacklisted services
Reveal reconnaissance
Bad behaviours
Compromised hosts, pivot
HTTP connection to external host
SSH reverse shell
Port scanning port 445 / 139
I would have expected a real case where network flows were used to discover something juicy. The talk ended with a review of the tools available to process flow data: SiLK, nfdump, ntop, but log management platforms can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
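As a hedged example of the kind of question flow data answers quickly (the flow archive path and filter are made up), spotting hosts scanning TCP/445 with nfdump could look like:
# top 10 source IPs by number of flows towards TCP/445 in a stored nfcapd archive
nfdump -R /var/cache/nfdump -s srcip/flows -n 10 'proto tcp and dst port 445'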
Then, Joel Snape from BT presented “Discovering (and fixing!) vulnerable systems at scale“. BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help with this huge task. It is based on the following circle: introduction, data collection, exploration and remediation (the hardest part!).
I like the description of their “CERT dropbox” which can be deployed at any place on the network to perform the following tasks:
Telemetry collection
Data exfiltration
Network exploration
Vulnerability/discovery scanning
An interesting remark from the audience: ISPs don’t only have to protect their customers from the wild Internet, but also the Internet from their (bad) customers!
Feike Hacqueboard, from TrendMicro, explained: “How political motivated threat actors attack“. He reviewed some famous stories of compromised organizations (like the French channel TV5) then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was performed as well as the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented “Lessons learned in fighting Targeted Bot Attacks“. After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are “silent challenges”. Loud challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources:
You need a reverse proxy (to be able to change requests on the fly)
LUA hooks
State db for concurrency
Load balancer for scalability
fingerprintjs2 / JS Challenge
Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented “Leveraging Event Streaming and Large Scale Analysis to Protect Cisco“. Cisco is collecting a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they face an issue with the indexing licence: to index all this data, they would need extra licenses (and pay a lot of money). They explained how to “pre-process” the data before sending it to Splunk to reduce the noise and the amount of data to index.
The idea is to put a “black box” between the collectors and Splunk. They explained what’s in this black box with some use cases:
WSA logs (350M+ events / day)
Passive DNS (7.5TB / day)
Users identification
osquery data
Some useful tips they gave that are valid for any log management platform:
Don’t assume your data is well-formed and complete
Don’t assume your data is always flowing
Don’t collect all the things at once
Share!
Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.
Like many developers, I get a broken build on a regular basis. When setting up a new project this happens a lot, because of missing infrastructure dependencies or not having access to the company's private Docker hub …
Anyway, on the old Jenkins (1), I used to go into the workspace folder to have a look at the build itself. In Jenkins 2 this seemed to have disappeared (when using the pipeline) … or did it … ?
It’s there, but you got to look very careful. To get there follow these simple steps:
Go to the failing build (the #…)
Click on “Pipeline Steps” in the sidebar
Click on “Allocate node : …”
On the right side the “Workspace” folder icon that we all love appears
Sometimes, amid the contempt of the human throng, he managed to catch a fleeting glance, to draw an attention fixed on a phone, to break for a few seconds through the stress- and anxiety-filled disdain of the hurried commuters. But the rare responses to his gesture were always the same: — No! — Thanks, no. (accompanied by a pursing of the lips and a shake of the head) — No time! — No change…
Yet he wasn't asking for money! He asked for nothing in exchange for his red roses. Except, perhaps, a smile.
Seized by an instinctive impulse, he had gone down into the metro that morning, determined to offer a little kindness, a little happiness, in the form of a bouquet of flowers for the first stranger who would accept it.
As the human swarm gradually scattered and dispersed into the big grey buildings of the business district, he looked sadly at his bouquet. — At least I will have tried, he murmured, before entrusting the flowers to the trash can, that cynical metal vase.
A tear welled up at the corner of his eye. He wiped it away with the back of his hand before sitting down right on the concrete steps. He closed his eyes, forcing his heart to stop. — Sir! Sir!
A hand was shaking his shoulder. Before his tired gaze stood a young police officer, uniform gleaming, haircut neat and dashing. — Sir, I watched you with your bouquet of flowers… — Yes? he said, filled with hope and gratitude. — May I see your permit for peddling in the metro? If you don't have one, I will have to fine you.
Last week, 3,271 people gathered at DrupalCon Baltimore to share ideas, to connect with friends and colleagues, and to collaborate on both code and community. It was a great event. One of my biggest takeaways from DrupalCon Baltimore is that Drupal 8's momentum is picking up more and more steam. There are now about 15,000 Drupal 8 sites launching every month.
The second half of my keynote highlighted the newest improvements to Drupal 8.3, which was released less than a month ago. I showcased how an organization like The Louvre could use Drupal 8 to take advantage of new or improved site builder (layouts video, workflow video), content author (authoring video) and end user (BigPipe video, chatbot video) features.
Finally, it was really rewarding to spotlight several Drupalists that have made an incredible impact on Drupal. If you are interested in viewing each spotlight, they are now available on my YouTube channel.
Keith Jay, designer for the Out-of-the-Box initiative
Thanks to all who made DrupalCon Baltimore a truly amazing event. Every year, DrupalCon allows the Drupal community to come together to re-energize, collaborate and celebrate. Discussions on evolving Drupal's Code of Conduct and community governance were held and will continue to take place virtually after DrupalCon. If you have not yet had the chance, I encourage you to participate.
Since Nginx 1.13, support has been added for TLSv1.3, the latest version of the TLS protocol. Depending on when you read this post, chances are you're running an older version of Nginx at the moment, which doesn't yet support TLS 1.3. In that case, consider running Nginx in a container for the latest version, or compiling Nginx from source.
Enable TLSv1.3 in Nginx
I'm going to assume you already have a working TLS configuration. It'll include configs like these:
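For instance, something along these lines (cipher and certificate details omitted; the exact directives will differ per setup):
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
Enabling TLSv1.3 is then a matter of adding it to the ssl_protocols line:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;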
The catch is the underlying TLS library: Nginx can only offer TLSv1.3 if OpenSSL supports it. From the OpenSSL project (TLS 1.3 support in OpenSSL): "TLSv1.3 draft-19 support is in master if you "config" with "enable-tls1_3". Note that draft-19 is not compatible with draft-18, which is still being used by some other libraries. Our draft-18 version is in the tls1.3-draft-18 branch."
In short: yes, you can enable TLS 1.3 in Nginx, but I haven't found an OS & library that will allow me to actually use TLS 1.3.
Today, while hunting, I found a malicious HTML page in my spam trap. The page was a fake JP Morgan Chase bank page. Nothing fancy. When I find such material, I usually search for “POST” HTTP requests to collect URLs and visit the websites that receive the victims’ data. As usual, the website was not properly protected and all files were readable. This one looked interesting:
The first question was: are those data relevant? Probably not… Why?
Today, many attackers protect their malicious websites via an .htaccess file to restrict access to their victims only. In this case, the Chase bank being based in the US, we could expect most of the visitors’ IP addresses to be geolocated there, but it was not the case this time. I downloaded the data file, which contained 503 records. Indeed, most of them contained empty or irrelevant information. So I decided to have a look at the IP addresses. Who’s visiting the phishing site? Let’s generate some statistics!
With Splunk, we can easily display them on a fancy map:
| inputlookup victims.csv | iplocation IP \
| stats count by IP, lat, lon, City, Country, Region
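For the map itself, Splunk's cluster map visualization usually wants geostats rather than stats; a sketch of that variant (not necessarily the exact query used here):
| inputlookup victims.csv | iplocation IP \
| geostats latfield=lat longfield=lon count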
Here is the top-5 of countries which visited the phishing page or, more precisely, which submitted a POST request:
United States: 64
United Kingdom: 13
France: 11
Germany: 7
Australia: 5
Some IP addresses visited the website multiple times:
37.187.173.11: 187
51.15.46.11: 77
219.117.238.170: 21
87.249.110.180: 9
95.85.8.153: 6
A reverse lookup on the IP addresses revealed some interesting information (see the sketch after these lists for a way to automate this):
The Google App Engine was the top visitor
Many VPS providers visited the page, probably owned by researchers (OVH, Amazon EC2)
Services protecting against phishing sites visited the page (e.g. phishtank.com, phishmongers.com, isitphishing.org)
Many Tor exit-nodes
Some online URL scanners (urlscan.io)
Some CERTS (CIRCL)
Two nice names were found:
Trendmicro
Trustwave
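A hedged sketch of how such a reverse lookup pass can be automated, assuming the IP addresses sit in the first column of victims.csv (the column layout is an assumption):
# resolve each unique IP back to a hostname (PTR record)
cut -d, -f1 victims.csv | sort -u | while read ip; do
  echo "$ip -> $(dig +short -x "$ip")"
done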
No real victim left his/her data on the fake website. Some records contained data, but fake data (though probably entered manually). All the traffic was generated by crawlers, bots and security tools…
I published the following diary on isc.sans.org: “HTTP Headers… the Achilles’ heel of many applications“.
When browsing a target web application, a pentester is looking for all “entry” or “injection” points present in the pages. Everybody knows that a static website with pure HTML code is less juicy compared to a website with many forms and gadgets where visitors may interact with it. Classic vulnerabilities (XSS, SQLi) are based on the user input that is abused to send unexpected data to the server… [Read more]
I published the following diary on isc.sans.org: “The story of the CFO and CEO…“.
I read an interesting article in a Belgian IT magazine[1]. Every year, they organise a big survey to collect feelings from people working in the IT field (not only security). It is very broad and covers their salary, work environments, expectations, etc. For infosec people, one of the key points was that people wanted to attend more trainings and conferences… [Read more]