In this case there will be support for Dutch and English. PartialEq is derived so that Lang items can be compared with ==. The Default trait is implemented to define the default language. The FromStr trait is implemented to allow creating a Lang item from a string, and Into<&'static str> is added to allow the conversion in the other direction. Finally, the FromRequest trait is implemented to allow extracting the "lang" cookie from the request.
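A minimal sketch of these pieces, assuming a Rocket 0.3-era API (English is picked as the default purely for illustration; the post doesn't say which language is the default):

use std::str::FromStr;

use rocket::{Outcome, Request};
use rocket::request::{self, FromRequest};

#[derive(PartialEq, Debug, Clone, Copy)]
pub enum Lang {
    Nl,
    En,
}

// Assumption: English as the default, for illustration only.
impl Default for Lang {
    fn default() -> Lang {
        Lang::En
    }
}

// "nl" / "en" -> Lang
impl FromStr for Lang {
    type Err = ();
    fn from_str(s: &str) -> Result<Lang, ()> {
        match s {
            "nl" => Ok(Lang::Nl),
            "en" => Ok(Lang::En),
            _ => Err(()),
        }
    }
}

// Lang -> "nl" / "en"
impl Into<&'static str> for Lang {
    fn into(self) -> &'static str {
        match self {
            Lang::Nl => "nl",
            Lang::En => "en",
        }
    }
}

// Extract the "lang" cookie; this guard always succeeds and falls back
// to the default when the cookie is missing or holds an unknown language.
impl<'a, 'r> FromRequest<'a, 'r> for Lang {
    type Error = ();
    fn from_request(request: &'a Request<'r>) -> request::Outcome<Lang, ()> {
        let lang = request
            .cookies()
            .get("lang")
            .and_then(|cookie| cookie.value().parse().ok())
            .unwrap_or_default();
        Outcome::Success(lang)
    }
}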
It always succeeds and falls back to the default when no cookie or an unknown language is found. How to use the Lang constraint on a request:
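A sketch of a handler using the Lang guard (the route and response strings are illustrative):

#[get("/")]
fn index(lang: Lang) -> &'static str {
    match lang {
        Lang::Nl => "Hallo wereld!",
        Lang::En => "Hello world!",
    }
}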
And the language switch page:
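A sketch, assuming the language code is passed as a URL segment (the route and handler names are illustrative):

use rocket::http::{Cookie, Cookies};
use rocket::response::Redirect;

#[get("/lang/<code>")]
fn switch_lang(code: String, mut cookies: Cookies) -> Redirect {
    // Store the chosen language; unknown codes simply fall back to the
    // default the next time the Lang guard runs.
    cookies.add(Cookie::new("lang", code));
    Redirect::to("/")
}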
And as a cherry on top, let's have the language switch page automatically redirect to the referrer. First let's implement a FromRequest trait for our own Referer type:
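A sketch of that guard, reading the request's Referer header (spelled "Referer" in HTTP):

use rocket::{Outcome, Request};
use rocket::request::{self, FromRequest};

pub struct Referer(pub String);

impl<'a, 'r> FromRequest<'a, 'r> for Referer {
    type Error = ();
    fn from_request(request: &'a Request<'r>) -> request::Outcome<Referer, ()> {
        match request.headers().get_one("Referer") {
            Some(url) => Outcome::Success(Referer(url.to_string())),
            // No Referer header: forward to the next matching handler.
            None => Outcome::Forward(()),
        }
    }
}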
When it finds a Referer header it uses the content, else the request is forwarded to the next handler. This means that if the request has no Referer header it is not handled, and a 404 will be returned. Finally let's update the language switch request handler:
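A sketch of the updated handler, building on the Referer guard above:

#[get("/lang/<code>")]
fn switch_lang(code: String, mut cookies: Cookies, referer: Referer) -> Redirect {
    cookies.add(Cookie::new("lang", code));
    // Send the visitor back to the page they came from.
    Redirect::to(&referer.0)
}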
Pretty elegant. A recap, with all the code combined and the missing glue added:
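The glue is mainly crate setup, route mounting and launching; with the sketches above it might look like this (Rocket 0.3 era, nightly Rust):

#![feature(plugin)]
#![plugin(rocket_codegen)]

extern crate rocket;

fn main() {
    rocket::ignite()
        .mount("/", routes![index, switch_lang])
        .launch();
}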
In February we spent a weekend in the Arctic Circle hoping to see the northern lights. I've been so busy, I only now got around to writing about it.
We decided to travel to Nellim for an action-packed weekend with outdoor adventure, wood fires, reindeer and no WiFi. Nellim is a small Finnish village, close to the Russian border and in the middle of nowhere. This place is a true winter wonderland with untouched, natural forests. On our way to the property we saw a wild reindeer eating on the side of the road. It was all very magical.
The trip was my gift to Vanessa for her 40th birthday! I reserved a private, small log cabin instead of the main lodge. The log cabin itself was really nice; even the bed was made of logs, with two bear heads carved into it. Vanessa called them Charcoal and Smokey. To stay warm we made fires and enjoyed our sauna.
One day we went dog sledding. As with all animals it seems, Vanessa quickly named them all; Marshmallow, Brownie, Snickers, Midnight, Blondie and Foxy. The dogs were so excited to run! After 3 hours of dog sledding in -30°C (-22°F) weather we stopped to warm up and eat; we made salmon soup in a small makeshift shelter that was similar to a tepee. The tepee had a small opening at the top and there was no heat or electricity.
The salmon soup was made over a fire, and we were skeptical at first how this would taste. The soup turned out to be delicious and even reminded us of the clam chowder that we have come to enjoy in Boston. We've since remade this soup at home and the boys also enjoy it. Not that this blog will turn into a recipe blog, but I plan to publish the recipe with photos at some point.
At night we would go out on "aurora hunts". The first night by reindeer sled, the second night using snowshoes, and the third night by snowmobile. To stay warm, we built fires either in tepees or in the snow and drank warm berry juice.
While the untouched land is beautiful, the locals definitely live off it. The Finns have an abundance of berries, mushrooms, reindeer and fish. We gladly admit we enjoyed our reindeer sled rides, as well as eating reindeer. We had fresh mushroom soup made out of hand-picked mushrooms. And every evening there was an abundance of fresh fish and reindeer offered for dinner. We also discovered a new gin, Napue, made from cranberries and birch leaves.
In the end, we didn't see the Northern Lights. We had a great trip, and seeing them would have been the icing on the cake. It just means that we'll have to come back another time.
This little upgrade caught me by surprise. In a MaxScale 2.0 to 2.1 upgrade, MaxScale changes the default bind address from IPv4 to IPv6. It's mentioned in the release notes like this:
MaxScale 2.1.2 added support for IPv6 addresses. The default interface that listeners bind to was changed from the IPv4 address 0.0.0.0 to the IPv6 address ::. To bind to the old IPv4 address, add address=0.0.0.0 to the listener definition.
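For instance, a listener section pinned back to the old IPv4 behaviour might look like this (the section and service names are illustrative):

[RW-Split-Listener]
type=listener
service=RW-Split-Router
protocol=MySQLClient
address=0.0.0.0
port=3306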
The result is pretty significant though, because authentication in MySQL is often host- or IP-based, with permissions being granted like this:
$ SET PASSWORD FOR 'xxx'@'10.0.0.1' = PASSWORD('your_password');
Notice the explicit use of IP address there.
Now, after a MaxScale 2.1 upgrade, it'll default to an IPv6 address for authentication, which gives you the following error message:
$ mysql -h127.0.0.1 -P 3306 -uxxx -pyour_password
ERROR 1045 (28000): Access denied for user 'xxx'@'::ffff:127.0.0.1' (using password: YES)
Notice how 127.0.0.1 turned into ::ffff:127.0.0.1? That's an IPv4 address being encapsulated in an IPv6 address. And it'll cause MySQL authentication to potentially fail, depending on how you assigned your users & permissions.
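One possible workaround, sketched here by mirroring the grant above (the database name and privilege level are placeholders), is to also allow the user to connect from the IPv4-mapped IPv6 address:

GRANT ALL PRIVILEGES ON your_db.* TO 'xxx'@'::ffff:127.0.0.1' IDENTIFIED BY 'your_password';

The alternative is the one from the release notes: add address=0.0.0.0 to the listener definition to bind to IPv4 again.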
Heads-up: Autoptimize 2.2 has just been released with a slew of new features (see the changelog) and an important security fix. Do upgrade as soon as possible.
If you prefer not to upgrade to 2.2 (because you prefer the stability of 2.1.0), you can instead download 2.1.1, which is identical to 2.1.0 except that the security fix has been backported.
I’ll follow up on the new features and on the security issue in more detail later today or tomorrow.
The second edition of BSides Athens took place this Saturday. I attended the first edition (my wrap-up is here) and I was happy to be accepted as a speaker for the second time! This edition moved to a new location which was great: good wireless, air conditioning and food. The day was organized around three tracks: the first two for regular talks and the third one for the CTF and workshops. The “boss”, Grigorios Fragkos, introduced the 2nd edition. This one gave more attention to a charity program called “the smile of the child”, which helps Greek kids to remain in touch with new technologies. A specific project is called “ODYSSEAS” and is based on a truck that travels across Greece to introduce kids to technologies like mobile phones, social networks, … BSides Athens donated to this project. A very nice initiative that was presented by Stefanos Alevizos, who received a slot of a few minutes to describe the program (content in Greek only).
The keynote was assigned to Dave Lewis who presented “The Unbearable Lightness of Failure”. The main point made by Dave is that we fail but… we learn from our mistakes! In other words, “failure is an acceptable teaching tool”. The keynote was built on many examples, like signs: we receive signs everywhere and we must understand how to interpret them. Or the famous Friedrich Nietzsche quote: “That which does not kill us makes us stronger”. We are facing failures all the time. The latest good example is the WannaCry story, which should never have happened, but… you know the story! Another important message is that we don’t have to be afraid to fail. We also have to share as much as possible, not only good stories but also bad ones. Sharing is key! Participate in blogs, social networks, podcasts. Break out of your silo! Dave is a renowned speaker and delivered a really good keynote!
Then talks were split across the two main rooms. For the first one, I decided to attend Thanassis Diogos’s presentation about “Operation Grand Mars”. In January 2017, Trustwave published an article which described this attack. Thanassis came back to this story with more details. After a quick recap about what incident management is, he reviewed all the facts related to the operation and gave some tips to detect abnormal activities on your network. It started with an alert generated by a workstation and, three days later, the same message came from a domain controller. Definitely not good! The entry point was infected via a malicious Word document / JavaScript. Then a payload was downloaded from Google Docs, which is, for most of our organizations, a trustworthy service. Then he explained how persistence was achieved (via autorun, scheduled tasks) and also lateral movements. The pass-the-hash attack was used. Another tip from Thanassis: if you see local admin accounts used for network logon, this is definitely suspicious! Good review of the attack with some good tips for blue teams.
My next choice was to move to the second track to follow Konstantinos Kosmidis’s talk about machine learning (a hot topic today in many conferences!). I’m not a big fan of these technologies but I was interested in the abstract. The talk was a classic one: after an introduction to machine learning (which we already use every day with technologies like Google’s face recognition, self-driving cars or voice recognition), why not apply this technique to malware detection? The goal is to detect, classify but, more importantly, to improve the algorithm! After reviewing some pros & cons, Konstantinos explained the technique he used in his research to convert malware samples into images. But, more interesting, he explained a technique based on steganography to attack this algorithm. The speaker was a little bit stressed but the idea looks interesting. If you’re interested, have a look at his GitHub repository.
Back to the first track to follow Professor Andrew Blyth with “The Role of Professionalism and Standards in Penetration Testing”. The penetration testing landscape has changed considerably in the last years. We switched from script kiddies searching for juicy vulnerabilities to professional services. The problem is that today some pentest projects are required not to detect security issues and improve security, but just to satisfy… compliance requirements. You know the “check-the-box” syndrome. Also, the business evolves and is requesting more assurance. The coming European GDPR regulation will increase the demand for penetration tests. But, a real pentest is not a Nessus scan with a new logo, as Andrew explained! We need professionalism. In the second part of the talk, Andrew reviewed some standards that involve pentests: iCAST, CBEST, PCI, OWASP, OSSTMM.
After a nice lunch with Greek food, back to the talks, with one by Andreas Ntakas and Emmanouil Gavriil about “Detecting and Deceiving the Unknown with Illicium”. They work for one of the sponsors and presented the tool developed by their company: Illicium. After the introduction, my feeling was that it’s a new honeypot with extended features. There is interesting stuff in it but, IMHO, it was a commercial presentation; I’d have expected a demo. Also, the tool looks nice but is dedicated to organizations that have already reached a mature security level. Indeed, before defeating the attacker, the first step is to properly implement basic controls like… patching! Something some organizations still don’t do today!
The next presentation was “I Thought I Saw a |-|4><0.-” by Thomas V. Fisher. Many interesting tips were provided by Thomas like:
Understand and define “normal” activities on your network to better detect what is “abnormal”.
Log everything!
Know your business
Keep in mind that the classic cyber kill-chain is not always followed by attackers (they don’t follow rules)
The danger is to try to detect malicious stuff based on… assumptions!
The model presented by Thomas was based on 4 A’s: Assess, Analyze, Articulate and Adapt! A very nice talk with plenty of tips!
The next slot was assigned to Ioannis Stais who presented his framework called LightBulb. The idea is to build a framework to help in bypassing common WAFs (web application firewalls). Ioannis first explained how common WAFs work and why they can be bypassed. Instead of testing all possible combinations (brute force), LightBulb relies on the following process:
Formalize the knowledge of code injection attack variations.
Expand the knowledge
Cross-check for vulnerabilities
Note that LightBulb is also available as a Burp Suite extension! The code is available here.
Then, Anna Stylianou presented "Car hacking – a real security threat or a media hype?". The last events that I attended also had talks about cars, but they focused more on abusing the remote control to open doors. Today, the focus is on the ECUs ("Engine Control Units") present in modern cars. A modern car might have >100 ECUs and >100 million lines of code, which means a great attack surface! There are many tools available to attack a car via its CAN bus; even the Metasploit framework can be used to pentest cars today! The talk was not dedicated to a specific attack or tool but was more a recap of the risks that car manufacturers are facing today. Indeed, threats changed:
theft from the car (breaking a window)
theft of the car
but today: theft of the use of the car (ransomware)
Some infosec gurus also predict that autonomous cars will be used as lethal weapons! As cars can be seen as computers on wheels, the potential attacks are the same: spoofing, tampering, repudiation, disclosure, DoS or privilege escalation issues.
The next slot was assigned to me. I presented "Unity Makes Strength" and explained how to improve interconnections between our security tools/applications. The last talk was performed by Theo Papadopoulos: A "Shortcut" to Red Teaming. He explained how .LNK files can be a nice way to compromise your victim's computer. I like the "love equation": Word + PowerShell = Love. Step by step, Theo explained how to build a malicious document with a link file, how to avoid mistakes and how to increase the chances of getting the victim infected. I like the persistence method based on assigning a popular hot-key (like CTRL-V) to a shortcut on the desktop. Windows will trigger the malicious script attached to the shortcut and then execute it (in this case, also paste the clipboard content). Evil!
The day ended with the announcement of the CTF winners and plenty of information about the next edition of BSides Athens. They already have plenty of ideas! It's now time for some days off across Greece with the family…
This week marked Acquia's 10th anniversary. In 2007, Jay Batson and I set out to build a software company based on open source and Drupal that we would come to call Acquia. In honor of our tenth anniversary, I wanted to share some of the milestones and lessons that have helped shape Acquia into the company it is today. I haven't shared these details before so I hope that my record of Acquia's founding not only pays homage to our incredible colleagues, customers and partners that have made this journey worthwhile, but that it offers honest insight into the challenges and rewards of building a company from the ground up. If you like this story, I also encourage you to read Jay's side of the story.
A Red Hat for Drupal
In 2007, I was attending the University of Ghent working on my PhD dissertation. At the same time, Drupal was gaining momentum; I will never forget when MTV called me seeking support for their new Drupal site. I remember being amazed that a brand like MTV, an institution I had grown up with, had selected Drupal for their website. I was determined to make Drupal successful and helped MTV free of charge.
It became clear that for Drupal to grow, it needed a company focused on helping large organizations like MTV be successful with the software. A "Red Hat for Drupal", as it were. I also noticed that other open source projects, such as Linux, had benefited from well-capitalized backers like Red Hat and IBM. While I knew I wanted to start such a company, I had not yet figured out how. I wanted to complete my PhD first before pursuing business. Due to the limited time and resources afforded to a graduate student, Drupal remained a hobby.
Little did I know that at the same time, over 3,000 miles away, Jay Batson was skimming through a WWII Navajo Code Talker Dictionary. Jay was stationed as an Entrepreneur in Residence at North Bridge Venture Partners, a venture capital firm based in Boston. Passionate about open source, Jay realized there was an opportunity to build a company that provided customers with the services necessary to scale and succeed with open source software. We were fortunate that Michael Skok, a Venture Partner at North Bridge and Jay's sponsor, was working closely with Jay to evaluate hundreds of open source software projects. In the end, Jay narrowed his efforts to Drupal and Apache Solr.
If you're curious as to how the Navajo Code Talker Dictionary fits into all of this, it's how Jay stumbled upon the name Acquia. Roughly translating as "to spot or locate", Acquia was the closest concept in the dictionary that reinforced the ideals of information and content that are intrinsic to Drupal (it also didn't hurt that the letter A would rank first in alphabetical listings). Finally, the similarity to the word "Aqua" paid homage to the Drupal Drop; this would eventually provide direction for Acquia's logo.
Breakfast in Sunnyvale
In March of 2007, I flew from Belgium to California to attend Yahoo's Open Source CMS Summit, where I also helped host DrupalCon Sunnyvale. It was at DrupalCon Sunnyvale where Jay first introduced himself to me. He explained that he was interested in building a company that could provide enterprise organizations supplementary services and support for a number of open source projects, including Drupal and Apache Solr. Initially, I was hesitant to meet with Jay. I was focused on getting Drupal 5 released, and I wasn't ready to start a company until I finished my PhD. Eventually I agreed to breakfast.
Over a baguette and jelly, I discovered that there was overlap between Jay's ideas and my desire to start a "Red Hat for Drupal". While I wasn't convinced that it made sense to bring Apache Solr into the equation, I liked that Jay believed in open source and that he recognized that open source projects were more likely to make a big impact when they were supported by companies that had strong commercial backing.
We spent the next few months talking about a vision for the business, eliminating Apache Solr from the plan, and discussing how we could elevate the Drupal community and how we would make money. In many ways, finding a business partner is like dating. You have to get to know each other, build trust, and see if there is a match; it's a process that doesn't happen overnight.
On June 25th, 2007, Jay filed the paperwork to incorporate Acquia and officially register the company name. We had no prospective customers, no employees, and no formal product to sell. In the summer of 2007, we received a convertible note from North Bridge. This initial seed investment gave us the capital to create a business plan, travel to pitch to other investors, and hire our first employees. Since meeting Jay in Sunnyvale, I had gotten to know Michael Skok who also became an influential mentor for me.
Jay and me on one of our early fundraising trips to San Francisco.
Throughout this period, I remained hesitant about committing to Acquia as I was devoted to completing my PhD. Eventually, Jay and Michael convinced me to get on board while finishing my PhD, rather than doing things sequentially.
Acquia, my Drupal startup
Soon thereafter, Acquia received a Series A term sheet from North Bridge, with Michael Skok leading the investment. We also selected Sigma Partners and Tim O'Reilly's OATV from all of the interested funds as co-investors with North Bridge; Tim had become both a friend and an advisor to me.
In many ways we were an unusual startup. Acquia itself didn't have a product to sell when we received our Series A funding. We knew our product would likely be support for Drupal, and evolve into an Acquia-equivalent of the Red Hat Network. However, neither of those things existed, and we were raising money purely on a PowerPoint deck. North Bridge, Sigma and OATV mostly invested in Jay and me, and in the belief that Drupal could become a billion dollar company that would disrupt the web content management market. I'm incredibly thankful to Jay, North Bridge, Sigma and OATV for making a huge bet on me.
Receiving our Series A funding was an incredible vote of confidence in Drupal, but it was also a milestone with lots of mixed emotions. We had raised $7 million, which is not a trivial amount. While I was excited, it was also a big step into the unknown. I was convinced that Acquia would be good for Drupal and open source, but I also understood that this would have a transformative impact on my life. In the end, I felt comfortable making the jump because I found strong mentors to help translate my vision for Drupal into a business plan; Jay and Michael's tenure as entrepreneurs and business builders complemented my technical strength and enabled me to fine-tune my own business building skills.
In November 2007, we officially announced Acquia to the world. We weren't ready but a reporter had caught wind of our stealth startup, and forced us to unveil Acquia's existence to the Drupal community with only 24 hours notice. We scrambled and worked through the night on a blog post. Reactions were mixed, but generally very supportive. I shared in that first post my hopes that Acquia would accomplish two things: (i) form a company that supported me in providing leadership to the Drupal community and achieving my vision for Drupal and (ii) establish a company that would be to Drupal what Ubuntu or Red Hat were to Linux.
An early version of Acquia.com, with our original logo and tagline. March 2008.
The importance of enduring values
It was at an offsite in late 2007 where we determined our corporate values. I'm proud to say that we've held true to those values that were scribbled onto our whiteboard 10 years ago. The leading tenet of our mission was to build a company that would "empower everyone to rapidly assemble killer websites".
In January 2008, we had six people on staff: Gábor Hojtsy (Principal Acquia engineer, Drupal 6 branch maintainer), Kieran Lal (Acquia product manager, key Drupal contributor), Barry Jaspan (Principal Acquia engineer, Drupal core developer) and Jeff Whatcott (Vice President of Marketing). Because I was still living in Belgium at the time, many of our meetings took place screen-to-screen.
Opening our doors for business
We spent a majority of the first year building our first products. Finally, in September of 2008, we officially opened our doors for business. We publicly announced commercial availability of the Acquia Drupal distribution and the Acquia Network. The Acquia Network would offer subscription-based access to commercial support for all of the modules in Acquia Drupal, our free distribution of Drupal. This first product launch closely mirrored the Red Hat business model by prioritizing enterprise support.
We quickly learned that in order to truly embrace Drupal, customers would need support for far more than just Acquia Drupal. In the first week of January 2009, we relaunched our support offering and announced that we would support all things related to Drupal 6, including all modules and themes available on drupal.org as well as custom code.
This was our first major turning point; supporting "everything Drupal" was a big shift at the time. Selling support for Acquia Drupal exclusively was not appealing to customers; however, we were unsure that we could financially sustain support for every Drupal module. As a startup, you have to be open to modifying and revising your plans, and to failing fast. It was a scary transition, but we knew it was the right thing to do.
Building a new business model for open source
Exiting 2008, we had launched Acquia Drupal, the Acquia Network, and had committed to supporting all things Drupal. While we had generated a respectable pipeline for Acquia Network subscriptions, we were not addressing Drupal's biggest adoption challenges: usability and scalability.
In October of 2008, our team gathered for a strategic offsite. Tom Erickson, who was on our board of directors, facilitated the offsite. Red Hat's operational model, which primarily offered support, had laid the foundation for how companies could monetize open source, but we were convinced that the emergence of the cloud gave us a bigger opportunity and helped us address Drupal's adoption challenges. Coming out of that seminal offsite we formalized the ambitious decision to build "Acquia Gardens" and "Acquia Fields". Here is why these two products were so important:
Solving for scalability: In 2008, scaling Drupal was a challenge for many organizations. Drupal scaled well, but the infrastructure that companies required to make Drupal scale was expensive and hard to find. We determined that the best way to help enterprise companies scale was by shifting the paradigm for web hosting from traditional rack models to the then emerging promise of the Cloud.
Solving for usability: In 2008, WordPress and Ning made it really easy for people to start blogging or to set up a social network. At the time, Drupal didn't encourage this same level of adoption for non-technical audiences. Acquia Gardens was created to offer an easy on-ramp for people to experience the power of Drupal, without worrying about installation, hosting, and upgrading. It was one of the first times we developed an operational model that would offer "Drupal-as-a-service".
Fast forward to today, and Acquia Fields was renamed Acquia Hosting and later Acquia Cloud. Acquia Gardens became Drupal Gardens and later evolved into Acquia Cloud Site Factory. In 2008, this product roadmap to move Drupal into the cloud was a bold move. Today, the Cloud is the starting point for any modern digital architecture. By adopting the Cloud into our product offering, I believe Acquia helped establish a new business model to commercialize open source. Today, I can't think of many open source companies that don't have a cloud offering.
Tom Erickson takes a chance on Acquia
Tom joined Acquia as an advisor and a member of our Board of Directors when Acquia was founded. Since the first time I met Tom, I always wanted him to be an integral part of Acquia. It took some convincing, but Tom eventually agreed to join us full time as our CEO in 2009. Jay Batson, Acquia's founding CEO, continued on as the Vice President at Acquia responsible for incubating new products and partnerships.
Moving from Europe to the United States
In 2010, after spending my entire life in Antwerp, I decided to move to Boston. The move would allow me to be closer to the team. A majority of the company was in Massachusetts, and at the pace we were growing, it was getting harder to help execute our vision all the way from Belgium. I was also hoping to cut down on travel time; in 2009 I flew 100,000 miles in just one year (little did I know that come 2016, I'd be flying 250,000 miles!).
This is a challenge that many entrepreneurs face when they commit to starting their own company. Initially, I was only planning on staying on the East Coast for two years. Moving 3,500 miles away from your home town, most of your relatives, and many of your best friends is not an easy choice. However, it was important to increase our chances of success, and relocating to Boston felt essential. My experience of moving to the US had a big impact on my life.
Building the universal platform for the world's greatest digital experiences
Entering 2010, I remember feeling that Acquia was really 3 startups in one: our support business (Acquia Network, which was very similar to Red Hat's business model), our managed cloud hosting business (Acquia Cloud) and Drupal Gardens (a WordPress.com based on Drupal). Welcoming Tom as our CEO would allow us to best execute on this offering, and moving to Boston enabled me to partner with Tom directly. It was during this transformational time that I think we truly transitioned out of our "founding period" and began to resemble the company I know today.
The decisions we made early in the company's life have proven to be correct. The world has embraced open source and cloud without reservation, and our long-term commitment to this disruptive combination has put us at the right place at the right time. Acquia has grown into a company with over 800 employees around the world; in total, we have 14 offices around the globe, including our headquarters in Boston. We also support an incredible roster of customers, including 16 of the Fortune 100 companies. Our work continues to be endorsed by industry analysts, as we have emerged as a true leader in our market. Over the past ten years I've had the privilege of watching Acquia grow from a small startup to a company that has crossed the chasm.
With a decade behind us, and many lessons learned, we are on the cusp of yet another big shift that is as important as the decision we made to launch Acquia Fields and Gardens in 2008. In 2016, I led the project to update Acquia's mission to "build the universal platform for the world's greatest digital experiences". This means expanding our focus, and becoming the leader in building digital customer experiences. Just like I openly shared our roadmap and strategy in 2009, I plan to share our next 10-year plan in the near future. It's time for Acquia to lay down the ambitious foundation that will enable us to be at the forefront of innovation and digital experience in 2027.
A big thank you
Of course, none of these results and milestones would be possible without the hard work of the Acquia team, our customers, partners, the Drupal community, and our many friends. Thank you for all your hard work. After 10 years, I continue to love the work I do at Acquia each day — and that is because of you.
This is definitely a bug, because RHEL7 (and thus CentOS and other derivatives) allows usernames that start with a digit. It's systemd's parsing of the User= parameter that determines the name doesn't follow its naming conventions, and decides to fall back to its default value: root.
Just to prove a point, here's a 0day user on CentOS 7.3.
$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
# useradd 0day
# su - 0day
$ id
uid=1067(0day) gid=1067(0day) groups=1067(0day)
That user works. The bug is thus in systemd where it doesn't recognize that as a valid username.
Why the big fuss?
If you quickly glance over the bug (and especially the hype-media that loves to blow this up), it can come across as if every username that starts with a digit can automatically get root privileges on any machine that has systemd installed (which, let's be frank, is pretty much every modern Linux distro).
That's not the case. You need a valid systemd Unit file before that could ever happen.
This might be a security issue, but is hard to trigger
So in order to trigger this behaviour, someone with root-level privileges needs to edit a Unit file and enter an "invalid username", in this case one that starts with a digit.
But you need root level privileges to edit the file in the first place and to reload systemd to make use of that Unit file.
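For illustration, a hypothetical unit file that would hit this code path (the path, description and command are made up):

# /etc/systemd/system/example.service (hypothetical)
[Unit]
Description=Demonstrates the User= fallback

[Service]
# systemd (before a fix) considers "0day" an invalid user name,
# ignores the setting and silently falls back to its default: root.
User=0day
ExecStart=/usr/bin/id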
So here's the potential security risk:
You could trick a sysadmin into creating such a Unit file, hoping they miss this behaviour, so that your service ends up running as root
You need an exploit to grant you write access to systemd's Unit files in order to escalate your privileges further
At this point, I don't think I'm missing another attack vector here.
Should this be fixed?
Yes. It's an obvious bug (at least on RHEL/CentOS 7), since a valid username does not get accepted by systemd, which triggers unexpected behaviour by launching services as root.
However, it isn't as bad as it sounds and does not grant any username with a digit immediate root access.
But it's systemd, so everyone loves to jump on that bandwagon and hype this as much as possible. Here's the deal folks: systemd is software and it has bugs. The Linux kernel also has bugs, but we don't go around blaming Linus for every one of those either.
I disabled the comments on this post because I'm not in the mood for yet another systemd debacle where my comment section gets abused for personal threats or violence. If you want to discuss, the post is on Hacker News & /r/linux.
In a blog post on Sunday, Mattias Geniar, a developer based in Belgium, said the issue qualifies as a bug because systemd's parsing of the User= parameter in unit files falls back on root privileges when user names are invalid.
It would be better, Geniar said in an email to The Register, if systemd defaulted to user rather than root privileges. Geniar stressed that while this presents a security concern, it's not a critical security issue because the attack vectors are limited. (The Register's headline: "Create a user called '0day', get bonus root privs – thanks, Systemd!")
My reply to their inquiry was a bit more nuanced than that though, so for the sake of transparency I'll publish my response below.
"Lennart Poettering seems disinclined to accept that systemd should check for invalid names."
I think he's right in that systemd doesn't have to check for invalid names; after all, those shouldn't even get on the system in the first place. It would be nice if systemd did though: the more validation, the better. Any web developer knows he/she shouldn't blindly trust user input, so why should systemd?
In this regard, I think Lennart is absolutely right that systemd should & does try to validate the username.
However, the problem here is that the username "0day" is a legit username that gets validated as invalid, after which systemd falls back to its system default: root. Arguably not a sane default; a non-privileged user would be better.
"I wanted to find out why you see the issue with systemd as a security flaw. How might the ability to create a user with a name like '0day' be exploited?"
If I can be 100% clear upfront: this flaw/bug report in systemd is most definitely a security issue. However, even though it sounds bad, it's not a critical security issue. Attack vectors are limited, but they exist. My post was mostly aimed at preventing bad press that would interpret this bug as "if your username contains a digit, you can become root".
In order to exploit this, you need:
a username that gets interpreted by systemd as invalid (there are most likely more potential usernames that get interpreted as invalid)
a systemd unit file to launch a script or service
Here's where this is a potential issue:
Shared hosting: systems that allow a username to be chosen by the client and that eventually run PHP, Ruby, ... as that user. On RHEL/CentOS 7, those could (1) get started by systemd
Self-service portals that use systemd to manage one-off or recurring tasks
Any place that allows user input for systemd-managed tasks; think control panels like Plesk, DirectAdmin, ... that allow usernames to be chosen for script execution
(1) those implementing shared hosting have a wide variety of ways to implement it though, so there's no guarantee that it's going to be a unit file with systemd.
In most cases (all?), you already need access to the system in one way or another to try and use this bug as a vector for privilege escalation.
I published the following diary on isc.sans.org: “A VBScript with Obfuscated Base64 Data“.
A few months ago, I posted a diary to explain how to search for (malicious) PE files in Base64 data. Base64 is indeed a common way to distribute binary content in an ASCII form. There are plenty of scripts based on this technique. On my Macbook, I’m using a small service created via Automator to automatically decode highlighted Base64 data and submit them to my Viper instance for further analysis… [Read more]
On January 27, 2008, the first RC followed, with boatloads of new features. Over the years, it was ported to Drupal 6¹, 7 and 8 and gained more features (I effectively added every single feature that was requested — I loved empowering the site builder). I did the same with my Hierarchical Select module.
I was a Computer Science student for the first half of those 9.5 years, and it was super exciting to see people actually use my code on hundreds, thousands and even tens of thousands of sites! In stark contrast with the assignments at university, where the results were graded, then discarded.
Frustration
Unfortunately this approach resulted in feature-rich modules, with complex UIs to configure them, and many, many bug reports and support requests, because they were so brittle and confusing. Rather than making the 80% case simple, I supported 99% of needed features, and made things confusing and complex for 100% of the users.
Main CDN module configuration UI in Drupal 7.
Learning
In Acquia’s Office of the CTO, my job is effectively “make Drupal better & faster”.
All this time (5 years already!), I’ve been helping to build Drupal itself (the system, the APIs, the infrastructure, the overarching architecture), and have seen the long-term consequences from both up close and afar: the concepts required to understand how it all works, the APIs to extend, override and plug in to. In that half decade, I’ve often cursed past commits, including my own!
That’s what led to:
my insistence that the dynamic_page_cache and big_pipe modules in Drupal 8 core do not have a UI, nor any configuration, and rely entirely on existing APIs and metadata to do their thing (with only a handful of bug reports in >18 months!)
I started porting the CDN module to Drupal 8 in March 2016 — a few months after the release of Drupal 8. It is much simpler to use (just look at the UI). It has less overhead (the UI is in a separate module, the altering of file URLs has far simpler logic). It has lower technical complexity (File Conveyor support was dropped, it no longer needs to detect HTTP vs HTTPS: it always uses protocol-relative URLs, there is less unnecessary configurability, and the farfuture functionality no longer tries to generate files itself and no longer has extremely detailed configurability).
In other words: the CDN module in Drupal 8 is much simpler. And has much better test coverage too. (You can see this in the tarball size too: it’s about half of the Drupal 7 version of the module, despite significantly more test coverage!)
CDNUI module in Drupal 8.
all the fundamentals
the ability to use simple CDN mappings, including conditional ones depending on file extensions, auto-balancing, and complex combinations of all of the above
preconnecting (and DNS prefetching for older browsers)
a simple UI to set it up — in fact, much simpler than before!
changed/improved
the CDN module now always uses protocol-relative URLs, which means there’s no more need to distinguish between HTTP and HTTPS, which simplifies a lot
the UI is now a separate module
the UI is optional: for power users there is a sensible configuration structure with strict config schema validation
complete unit test coverage of the heart of the CDN module, thanks to D8’s improved architecture
preconnecting (and DNS prefetching) using headers rather than tags in the HTML head, which allows a much simpler/cleaner Symfony response subscriber
tours instead of advanced help, which very often was ignored
there is nothing to configure for the SEO (duplicate content prevention) feature anymore
nor is there anything to configure for the Forever cacheable files feature anymore (named Far Future expiration in Drupal 7), and it’s a lot more robust
removed
all the exceptions (blacklist, whitelist, based on Drupal path, file path…) — all of them are a maintenance/debugging/cacheability nightmare
configurability of SEO feature
configurability of unique file identifiers for the Forever cacheable files feature
testing mode
For very complex mappings, you must manipulate cdn.settings.yml — there’s inline documentation with examples there. Those who need the complex setups don’t mind reading three commented examples in a YAML file. This used to be configurable through the UI, but it was also possible to configure it “incorrectly”, resulting in broken sites — that’s no longer possible.
There’s comprehensive test coverage for everything in the critical path, and basic integration test coverage. Together, they ensure peace of mind, and uncover bugs in the next minor Drupal 8 release: BC breaks are detected early and automatically.
The results after 8 months: contributed module maintainer bliss
The first stable release of the CDN module for Drupal 8 was published on December 2, 2016. Today, I released the first patch release: cdn 8.x-3.1. The change log is tiny: a PHP notice fixed, two minor automated testing infrastructure problems fixed, and two new minor features added.
We can now compare the Drupal 7 and 8 versions of the CDN module.
In other words: maintaining this contributed module now requires pretty much zero effort!
Conclusion
For your own Drupal 8 modules, no matter if they’re contributed or custom, I recommend a few key rules:
Selective feature set.
Comprehensive unit test coverage for critical code paths (UnitTestCase)² + basic integration test coverage (BrowserTestBase) maximizes confidence while minimizing time spent.
Don’t provide/build APIs (that includes hooks) unless you see a strong use case for it. Prefer coarse over granular APIs unless you’re absolutely certain.
Avoid configurability if possible. Otherwise, use config schemas to your advantage, provide a simple UI for the 80% use case. Leave the rest to contrib/custom modules.
This is more empowering for the Drupal site builder persona, because they can’t shoot themselves in the foot anymore. It’s no longer necessary to learn the complex edge cases in each contributed module’s domain, because they’re no longer exposed in the UI. In other words: domain complexities no longer leak into the UI.
At the same time, it hugely decreases the risk of burnout in module maintainers!
And of course: use the CDN module, it’s rock solid! :)
Related reading
Finally, read Amitai Burstein’s “OG8 Development Mindset”! He makes very similar observations, albeit about a much bigger contributed module (Organic Groups). Some of my favorite quotes:
About edge cases & complexity:
Edge cases are no longer my concern. I mean, I’m making sure that edge cases can be done and the API will cater to it, but I won’t go too far and implement them. […] we’ve somewhat reduced the flexibility in order to reduce the complexity; but while doing so, made sure edge cases can still hook into the process.
About tests:
I think there is another hidden merit in tests. By taking the time to carefully go over your own code - and using it - you give yourself some pause to think about the necessity of your recently added code. Do you really need it? If you are not afraid of writing code and then throwing it out the window, and you are true to yourself, you can create a better, less complex, and polished module.
About feature set & UI:
One of the mistakes that I feel was made in OG7 was exposing a lot of the advanced functionality in the UI. […] But these are all advanced use cases. When thinking about how to port them to OG8, I think we found the perfect solution: we didn’t port it.
I also did my bachelor thesis about Drupal + CDN integration, which led to the Drupal 6 version of the module. ↩︎
Unit tests in Drupal 8 are wonderful, they’re nigh impossible in Drupal 7. They finish running in seconds. ↩︎
I've been writing a weekly newsletter on Linux & open source technologies for nearly 2 years now at cron.weekly, and to this day I'm amazed by all the feedback and responses I've gotten from it. What's even more fun is watching the subscriber base grow on a weekly basis, to over 6,000 users already!
Every issue brings good feedback on the projects I list, comments on the stories, ... it's so much fun!
But there's a downside ...
Taking cron.weekly to the next level
All that feedback has been directed at me. I've been mailing thousands of Linux enthusiasts every week and it's been one-way communication. I talk, they listen. That seems like a waste of potential.
Imagine if everyone on that newsletter could share their knowledge and expertise and connect with one another. I'm definitely not the smartest person to be mailing them all; there are far more intelligent folks subscribed. I want to get them involved!
To start building a community around cron.weekly, I've launched the cron.weekly forum. Yes, there are things like Reddit, Stack Overflow, Quora, ... and all other fun places to discuss topics. But there's one thing they don't have: the like-minded people that subscribe to cron.weekly, who each have the same interest at heart: caring about Linux, open source & web technologies.
Launching the forum is an experiment; there are already so many places out there to "waste your time" online, but I'm confident it can become an interactive place to discuss new technologies, share ideas or launch new open source projects.
If questions posted to the forum remain unanswered, I'll call upon the great powers of cron.weekly subscribers to highlight them and raise awareness, get more eyes on the topic & find the best answer possible.
The last cron.weekly issue #88 already had an "Ask cron.weekly" section to get that started.
I'm confident about the future of that forum; I think the newsletter can be a great way to get more attention to difficult-to-solve questions and allow cron.weekly to actively help its community members.
If you visit Acquia's homepage today, you will be greeted by this banner:
We've published this banner in solidarity with the hundreds of companies who are voicing their support of net neutrality.
Net neutrality regulations ensure that web users are free to enjoy whatever sites they choose without interference from Internet Service Providers (ISPs). These protections establish an open web where people can explore and express their ideas. Under the current administration, the U.S. Federal Communications Commission favors less-strict regulation of net neutrality, which could drastically alter the way that people experience and access the web. Today, Acquia is joining the ranks of companies like Amazon, Atlassian, Netflix and Vimeo to advocate for strong net neutrality regulations.
Why the FCC wants to soften net neutrality regulations
In 2015, the United States implemented strong protections favoring net neutrality after ISPs were classified as common carriers under Title II of the Communications Act of 1934. This classification catalogs broadband as an "essential communication service", which means that services are to be delivered equitably and costs kept reasonable. Title II was the same classification granted to telcos decades ago to ensure consumers had fair access to phone service. Today, the Title II classification of ISPs protects the open internet by making paid prioritization, blocking or throttling of traffic unlawful.
The issue of net neutrality has come under scrutiny since the appointment of Ajit Pai as the Chairman of the Federal Communications Commission. Pai favors less regulation and has suggested that the net neutrality laws of 2015 impede the ISP market. He argues that while people may support net neutrality, the market requires more competition to establish faster and cheaper access to the Internet. Pai believes that net neutrality regulations have the potential to curb investment in innovation and could heighten the digital divide. As FCC Chairman, Pai wants to reclassify broadband services under less-restrictive regulations and to eliminate definitive protections for the open internet.
In May 2017, the three members of the Federal Communications Commission voted 2-1 to advance a plan to remove Title II classification from broadband services. That vote launched a public comment period, which is open until mid-August. After this period the commission will take a final vote.
Why net neutrality protections are good
I strongly disagree with Pai's proposed reclassification of net neutrality. Without net neutrality, ISPs can determine how users access websites, applications and other digital content. Today, both the free flow of information, and exchange of ideas benefit from 'open highways'. Net neutrality regulations ensure equal access at the point of delivery, and promote what I believe to be the fairest competition for content and service providers.
If the FCC rolls back net neutrality protections, ISPs would be free to charge site owners for priority service. This goes directly against the idea of an open web, which guarantees an unfettered and decentralized platform to share and access information. There are many challenges in maintaining an open web, including "walled gardens" like Facebook and Google. We call them "walled gardens" because they control the applications, content and media on their platform. While these closed web providers have accelerated access and adoption of the web, they also raise concerns around content control and privacy. Issues of net neutrality contribute a similar challenge.
When certain websites have degraded performance because they can't afford the premiums asked by ISPs, it affects how we explore and express ideas online. Not only does it drive up the cost of maintaining a website, but it undermines the internet as an open space where people can explore and express their ideas. It creates a class system that puts smaller sites or less funded organizations at a disadvantage. Dismantling net neutrality regulations raises the barrier for entry when sharing information on the web as ISPs would control what we see and do online. Congruent with the challenge of "walled gardens", when too few organizations control the media and flow of information, we must be concerned.
In the end, net neutrality affects how people, including you and me, experience the web. The internet's vast growth is largely a result of its openness. Contrary to Pai's reasoning, the open web has cultivated creativity, spawned new industries, and protects the free expression of ideas. At Acquia, we believe in supporting choice, competition and free speech on the internet. The "light touch" regulations now proposed by the FCC may threaten that very foundation.
What you can do today
If you're also concerned about the future of net neutrality, you can share your comments with the FCC and the U.S. Congress (it will only take you a minute!). You can do so through Fight for the Future, who organized today's day of action. The 2015 ruling that classified broadband service under Title II came after the FCC received more than 4 million comments on the topic, so let your voice be heard.
I published the following diary on isc.sans.org: “Backup Scripts, the FIM of the Poor“.
File Integrity Management or “FIM” is an interesting security control that can help to detect unusual changes in a file system. For example, on a server, there are directories that do not change often. An example in a UNIX environment:
Binaries & libraries in /usr/lib, /usr/bin, /bin, /sbin, /usr/local/bin, …
A QEventLoop is a heavy dependency. Not every worker thread wants to require all its consumers to have one. This makes QueuedConnection not always suitable. I get that signals and slots are a useful mechanism, also for thread communications. But what if your worker thread has no QEventLoop, yet wants to wait for a result that another worker thread produces?
QWaitCondition is often what you want. Don’t be afraid to use it. Also, don’t be afraid to use QFuture and QFutureWatcher.
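A minimal sketch of one worker thread waiting for another’s result with QWaitCondition (the names and the int payload are illustrative, not from the original post):

#include <QMutex>
#include <QWaitCondition>

QMutex mutex;
QWaitCondition resultReady;
bool done = false;
int result = 0;

// Runs in the producing worker thread.
void produceResult() {
    int computed = 42; // stand-in for real work
    mutex.lock();
    result = computed;
    done = true;
    resultReady.wakeOne();
    mutex.unlock();
}

// Runs in the consuming worker thread; no QEventLoop required.
int waitForResult() {
    mutex.lock();
    while (!done)               // loop guards against spurious wakeups
        resultReady.wait(&mutex);
    int value = result;
    mutex.unlock();
    return value;
}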
Just be aware that the guys at Qt have not yet decided what the final API for the asynchronous world should be. The KIO guys discussed making a QJob and/or a QAbstractJob, because QFuture is result (of T) based (and waits and blocks on it, using a condition), while a QJob (derived from what KJob currently is) isn’t, or wouldn’t, or shouldn’t block (such a QJob should allow for interactive continuation, for example — “overwrite this file? Y/N”). Meanwhile you want a clean API to fetch the result of any asynchronous operation, whether you block waiting for it or not. It’s an uneasy choice for an API designer. Don’t all of us want APIs that can withstand the test of time? We do, yes.
Yeah. The world of programming is, at some level, complicated. But I’m also sure something good will come out of it. Meanwhile, model your asynchronous APIs on the principles of QFuture and/or KJob: return something that can be waited for.
Sometimes a prediction of what things will be like is worth more than a promise. I honestly can’t predict what Thiago will approve, commit or endorse. And I shouldn’t.