Channel: Planet Grep

Frank Goossens: No REST for the wicked


After the PR beating WordPress took with the massive defacements of non-upgraded WordPress installations, it is time to revisit the core team's point of view that the REST API should be active for everyone and that no option should be provided to disable it (as per the "decisions, not options" philosophy). I, for one, installed the "Disable REST API" plugin.

(Embedded YouTube video.)


Xavier Mertens: [SANS ISC Diary] Analysis of a Suspicious Piece of JavaScript


I published the following diary on isc.sans.org: “Analysis of a Suspicious Piece of JavaScript“.

What to do on a cloudy lazy Sunday? You go hunting and review some alerts generated by your robots. Pastebin remains one of my favourite playgrounds and you always find interesting stuff there. In a recent diary, I reported many malicious PE files stored in Base64 but, today, I found a suspicious piece of JavaScript code… [Read more]

[The post [SANS ISC Diary] Analysis of a Suspicious Piece of JavaScript has been first published on /dev/random]

Xavier Mertens: Think Twice before Posting Data on Pastebin!


Pastebin.com is one of my favourite playgrounds. I'm monitoring the content of all pasties posted on this website. My goal is to find juicy data like configurations, database dumps and leaked credentials. Sometimes you can also find malicious binary files.
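For the curious: such monitoring does not require anything fancy. Below is a minimal sketch in Python of the kind of robot I'm talking about; it assumes access to Pastebin's paid scraping API, and the endpoint and JSON field names are assumptions from memory, so check the API documentation before relying on them:

import re
import time
import requests

# Endpoint and field names are assumptions; verify them against Pastebin's API docs.
SCRAPE_URL = "https://scrape.pastebin.com/api_scraping.php?limit=100"
JUICY = re.compile(r"password|BEGIN RSA PRIVATE KEY|jdbc:|mysqli?_connect", re.I)

seen = set()
while True:
    for paste in requests.get(SCRAPE_URL, timeout=10).json():
        if paste["key"] in seen:
            continue
        seen.add(paste["key"])
        raw = requests.get(paste["scrape_url"], timeout=10).text
        if JUICY.search(raw):
            print("Interesting pastie:", paste["full_url"])
    time.sleep(60)  # be gentle with the API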

Of course, I knew that I'm not the only one interested in the pastebin.com content. Plenty of researchers and organizations like CERTs and SOCs are doing the same, but I was very surprised by the number of hits that I got on my latest pastie:

Pastebin Hits

For the purpose of my last ISC diary, I posted some data on pastebin.com and did not communicate the link by any means. Before publishing the diary, I had a quick look at my pastie and it had already received 105 unique views! It had been posted only a few minutes before.

Conclusion: Think twice before posting data to pastebin. Even if you delete your pastie quickly, chances are that it will already have been scraped by many robots (including mine! ;-))

[The post Think Twice before Posting Data on Pastebin! has been first published on /dev/random]

Mattias Geniar: PHP 7.2 to get modern cryptography into its standard library


The post PHP 7.2 to get modern cryptography into its standard library appeared first on ma.ttias.be.

This actually makes PHP the first language, ahead of Erlang and Go, to get a secure crypto library in its core.

Of course, having the ability doesn't necessarily mean it gets used properly by developers, but this is a major step forward.

The vote for the Libsodium RFC has been closed. The final tally is 37 yes, 0 no.

I'll begin working on the implementation with the desired API (sodium_* instead of \Sodium\*).

Thank you to everyone who participated in these discussions over the past year or so and, of course, everyone who voted for better cryptography in PHP 7.2.

Scott Arciszewski

@CiPHPerCoder

Source: php.internals: [RFC][Vote] Libsodium vote closes; accepted (37-0)

As a reminder, the Libsodium RFC:

Title: PHP RFC: Make Libsodium a Core Extension

Libmcrypt hasn't been touched in eight years (last release was in 2007), leaving openssl as the only viable option for PHP 5.x and 7.0 users.

Meanwhile, libsodium bindings have been available in PECL for a while now, and have reached stability.

Libsodium is a modern cryptography library that offers authenticated encryption, high-speed elliptic curve cryptography, and much more. Unlike other cryptography standards (which are a potluck of cryptography primitives; i.e. WebCrypto), libsodium is comprised of carefully selected algorithms implemented by security experts to avoid side-channel vulnerabilities.

Source: PHP RFC: Make Libsodium a Core Extension
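To illustrate what "authenticated encryption" buys you in practice, here is a minimal sketch using PyNaCl, the Python bindings to the same libsodium library; the PHP sodium_* functions expose equivalent primitives:

from nacl.secret import SecretBox
from nacl.utils import random

key = random(SecretBox.KEY_SIZE)             # 32-byte random secret key
box = SecretBox(key)

ciphertext = box.encrypt(b"attack at dawn")  # a random nonce is generated and prepended
plaintext = box.decrypt(ciphertext)          # raises CryptoError if the ciphertext was tampered with
assert plaintext == b"attack at dawn"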

The post PHP 7.2 to get modern cryptography into its standard library appeared first on ma.ttias.be.

Luc Verhaegen: The beginning of the end of the RadeonHD driver.

Soon it will be a decade since we started the RadeonHD driver, where we pushed ATI to a point of no return, got a proper C coded graphics driver and freely accessible documentation out. We all know just what happened to this in the end, and i will make a rather complete write-up spanning multiple blog entries over the following months. But while i was digging out backed up home directories for information, i came across this...

It is a copy of the content of an email i sent to an executive manager at SuSE/Novell, a textfile called bridgmans_games.txt. This was written in July 2008, after the RadeonHD project gained what was called "Executive oversight". After the executive had seen several of John Bridgman's games first-hand, he asked me to provide him with a timeline of some of the games he had played with us. The explanation below covers only things that i knew this executive was not aware of, and it covers a year of RadeonHD development^Wstruggle, from July 2007 until April 2008. It took me quite a while to compile this, and it was a pretty tough task mentally, as it finally showed, unequivocally, just how we had been played all along. After this email was read by this executive, he and my department lead took me to lunch and told me to let this die out slowly, and to not make a fuss. Apparently, I should've counted myself lucky to see this sort of game being played this early on in my career.

This is the raw copy, and only the names of some people, who are not in the public eye have been redacted out (you know who you are), some other names were clarified. All changes are marked with [].

These are:
SuSE Executive manager: [EXEC]
SuSE technical project manager: [TPM]
AMD manager (from a different department than ATI): [AMDMAN]
AMD liaison: [SANTA] (as he was the one who handed me and Egbert Eich a 1 cubic meter box full of graphics cards ;))

[EXEC], i made a rather extensive write-up about the goings-on throughout 
the whole project. I hope this will not intrude too much on your time, 
please have a quick read through it, mark what you think is significant and 
where you need actual facts (like emails that were sent around...) Of 
course, this is all very subjective, but for most of this emails can be 
provided to back things up (some things were said on the weekly technical 
call only).

First some things that happened beforehand...

Before we came around with the radeonhd driver, there were two projects 
going
already...

One was Dave Airlie who claimed he had a working driver for ATI Radeon R500
hardware. This was something he wrote during 2006, from information under 
NDA, and this nda didn't allow publication. He spent a lot of time bashing 
ATI for it, but the code was never seen, even when the NDA was partially 
lifted later on.

The other was a project started in March 2007 by Nokia's Daniel Stone and
Ubuntu's [canonical's] Matthew Garreth. The avivo driver. This driver was a
driver separate from the -ati or -radeon drivers, under the GPL, and written
from scratch by tracing the BIOS and tracking register changes [(only the
latter was true)]. Matthew and Daniel were only active contributors for the
first few weeks and quickly lost interest. The bulk of the development was
taken over by Jerome Glisse after that.

Now... Here is where we encounter John [Bridgman]...

Halfway july, when we first got into contact with Bridgman, he suddenly 
started bringing up AtomBIOS. When then also we got the first lowlevel 
hardware spec, but when we asked for hardware details, how the different 
blocks fitted together, we never got an answer as that information was said 
to not be available. We also got our hands on some of the atombios 
interpreter code, and a rudimentary piece of documentation explaining how to 
handle the atombios bytecode. So Matthias [Hopf] created an atombios 
disassembler to help us write up all the code for the different bits. And 
while we got some further register information when we asked for it, 
bridgman kept pushing atombios relentlessly.

This whining went on and on, until we, late august, decided to write up a
report of what problems we were seeing with atombios, both from an interface 
as from an actual implementation point of view. We never got even a single 
reply to this report, and we billed AMD 10 or so mandays, and [with] 
bridgman apparently berated, as he never brought the issue up again for the 
next few months... But the story of course didn't end there of course.

At the same time, we wanted to implement one function from atomBIOS at the
time: ASICInit. This function brings the hardware into a state where
modesetting can happen. It replaces a full int10 POST, which is the 
standard, antique way of bringing up VGA on IBM hardware, and ASICInit 
seemed like a big step forward. John organised a call with the BIOS/fglrx 
people to explain to us what ASICInit offered and how we could use it. This 
is the only time that Bridgman put us in touch with any ATI people directly. 
The proceedings of the call were kind of amusing... After 10 minutes or so, 
one of the ATI people said "for fglrx, we don't bother with ASICInit, we 
just call the int10 POST". John then quickly stated that they might have to 
end the call because apparently other people were wanting to use this 
conference room. The call went on for at least half an hour longer, and john 
from time to time stated this again. So whether there was truth to this or 
not, we don't know, it was a rather amusing coincidence though, and we of 
course never gained anything from the call.

Late august and early september, we were working relentlessly towards 
getting our driver into a shape that could be exposed to the public. 
[AMDMAN] had warned us that there was a big marketing thing planned for the 
kernel summit in september, but we never were told any of this through our 
partner directly. So we were working like maniacs on getting the driver in a 
state so that we could present it at the X Developers Summit in Cambridge.

Early september, we were stuck on several issues because we didn't get any
relevant information from Bridgman. In a mail sent on september 3, [TPM] 
made the following statement: "I'm not willing to fall down to Mr. B.'s mode 
of working nor let us hold up by him. They might (or not) be trying to stop 
the train - they will fail." It was very very clear then already that things 
were not being played straight.

Bridgman at this point was begging to see the code, so we created a copy of 
a (for us) working version, and handed this over on september the 3rd as 
well. We got an extensive review of our driver on a very limited subset of 
the hardware, and we were mainly bashed for producing broken modes (monitor 
synced off). This had of course nothing to do with us using direct 
programming, as this was some hardware limits [PLL] that we eventually had 
to go and find ourselves. Bridgman never was able to help provide us with 
suitable information on what the issue could be or what the right limits 
were, but the fact that the issue wasn't to be found in atombios versus 
direct programming didn't make him very happy.

So on the 5th of september, ATI went ahead with its marketing campaign, the 
one which we weren't told about at all. They never really mentioned SUSE's 
role in this, and Dave Airlie even made a blog entry stating how he and Alex 
were the main actors on the X side. The SUSE role in all of this was 
severely downplayed everywhere, to the point where it became borderline 
insulting. It was also one of the first times where we really felt that 
Bridgman maybe was working more closely with Dave and Alex than with us.

We then had to rush off to the X Developers Summit in Cambridge. While we 
met ATIs Matthew Tippett and Bridgman beforehand, it was clear that our role 
in this whole thing was being actively downplayed, and the attitude towards 
egbert and me was predominantly hostile. The talk bridgman and Tippett gave 
at XDS once again fully downplayed our effort with the driver. During our 
introduction of the radeonHD driver, John Bridgman handed Dave Airlie a CD 
with the register specs that went around the world, further reducing our 
relevance there. John stated, quite noisily, in the middle of our 
presentation, that he wanted Dave to be the first to receive this 
documentation. Plus, it became clear that Bridgman was very close to Alex 
Deucher.

The fact that we didn't catch all supper/dinner events ourselves worked 
against us as well, as we were usually still hacking away feverishly at our 
driver. We also had to leave early on the night of the big dinner, as we had 
to come back to nürnberg so that we could catch the last two days of the 
Labs conference the day after. This apparently was a mistake... We later 
found a picture (http://flickr.com/photos/cysglyd/1371922101/) that has the 
caption "avivo de-magicifying party". This shows Bridgman, Tippett, Deucher, 
all the redhat and intel people in the same room, playing with the (then 
competing) avivo driver. This picture was taken while we were on the plane 
back to nürnberg. Another signal that, maybe, Mr Bridgman isn't every 
committed to the radeonhd driver. Remarkably though, the main developer 
behind the avivo driver, Jerome Glisse, is not included here. He was found 
to be quite fond of the radeonhd driver and immediately moved on to 
something else as his goal was being achieved by radeonhd now.

On monday the 17th, right before the release, ATI was throwing up hurdles... 
We suddenly had to put AMD copyright statements on all our code, a 
requirement never made for any AMD64 work which is as far as i understood 
things also invalid under german copyright law. This was a tough nut for me 
to swallow, as a lot of the code in radeonhd modesetting started life 5years 
earlier in my unichrome driver. Then there was the fact that the atombios 
header didn't come with an acceptable license and copyright statement, and a 
similar situation for the atombios interpreter. The latter two gave Egbert a 
few extra hours of work that night.

I contacted the phoronix.com owner (Michael Larabel) to talk to him about an 
imminent driver release, and this is what he stated: "I am well aware of the 
radeonHD driver release today through AMD and already have some articles in 
the work for posting later today." The main open source graphics site had 
already been given a story by some AMD people, and we were apparently not 
supposed to be involved here. We managed to turn this article around there 
and then, so that it painted a more correct picture of the situation. And 
since then, we have been working very closely with Larabel to make sure that 
our message gets out correctly.

The next few months seemed somewhat uneventful again. We worked hard on 
making our users happy, on bringing our driver up from an initial 
implementation to a full featured modesetting driver. We had to 
painstakingly try to drag more information out of John, but mostly find 
things out for ourselves. At this time the fact that Dave Airlie was under 
NDA still was mentioned quite often in calls, and John explained that this 
NDA situation was about to change. Bridgman also brought up that he had 
started the procedure to hire somebody to help him with documentation work 
so that the flow of information could be streamlined. We kind of assumed 
that, from what we saw at XDS, this would be Alex Deucher, but we were never 
told anything.

Matthias got in touch with AMDs GPGPU people through [AMDMAN], and we were 
eventually (see later) able to get TCore code (which contains a lot of very
valuable information for us) out of this. Towards the end of oktober, i 
found an article online about AMDs upcoming Rv670, and i pasted the link in 
our radeonhd irc channel. John immediately fired off an email explaining 
about the chipset, apologising for not telling us earlier. He repeated some 
of his earlier promises of getting hardware sent out to us quicker. When the 
hardware release happened, John was the person making statements to the open 
source websites (messing up some things in the process), but [SANTA] was the 
person who eventually sent us a board.

Suddenly, around the 20th of november, things started moving in the radeon
driver. Apparently Alex Deucher and Dave Airlie had been working together 
since November 3rd to add support for r500 and r600 hardware to the radeon 
driver. I eventually dug out a statement on fedora devel, where redhat guys, 
on the last day of oktober, were actively refusing our driver from being 
accepted. Dave Airlie stated the following: "Red Hat and Fedora are 
contemplating the future wrt to AMD r5xx cards, all options are going to be 
looked at, there may even be options that you don't know about yet.."

They used chunks of code from the avivo driver, chunks of code from the
radeonhd driver (and the bits where we spent ages finding out what to do), 
and they implemented some parts of modesetting using AtomBIOS, just like 
Bridgman always wanted it. On the same day (20th) Alex posted a blog entry 
about being hired by ATI as an open source developer. Apparently, Bridgman 
mentioned that Dave had found something when doing AtomBIOS on the weekly 
phonecall beforehand. So Bridgman had been in close communication with Dave 
for quite a while already.

Our relative silence and safe working ground was shattered. Suddenly it was
completely clear that Bridgman was playing a double game. As some diversion
mechanisms, John now suddenly provided us with the TCore code, and suddenly
gave us access to the AMD NDA website, and also told us on the phone that 
Alex is not doing any Radeon work in his worktime and only does this in his 
spare time. Surely we could not be against this, as surely we too would like 
to see a stable and working radeon driver for r100-r400. He also suddenly 
provided those bits of information Dave had been using in the radeon driver, 
some of it we had been asking for before and never got an answer to then.

We quickly ramped up the development ourselves and got a 1.0.0 driver 
release out about a week later. John in the meantime was expanding his 
marketing horizon, he did an interview with the Beyond3D website (there is 
of course no mention about us), and he started doing some online forums 
(phoronix), replying to user posts there, providing his own views on 
"certain" situations (he has since massively increased his time on that 
forum).

One interesting result of the competing project is that suddenly we were
getting answers to questions we had been asking for a long time. An example 
of such an issue is the card internal view of where the cards own memory 
resides. We spent weeks asking around, not getting anywhere, and suddenly 
the registers were used in the competing driver, and within hours, we got 
the relevant information in emails from Mr Bridgman (November 17 and 20, 
"Odds and Ends"). Bridgman later explained that Dave Airlie knew these 
registers from his "previous r500 work", and that Dave asked Bridgman for 
clearance for release, which he got, after which we also got informed about 
these registers as well. The relevant commit message in the radeon driver 
predates the email we received with the related information by many hours.

[AMDMAN] had put us in touch with the GPGPU people from AMD, and matthias 
and one of the GPGPU people spent a lot of time emailing back and forth. But
suddenly, around the timeframe where everything else happened (competing
driver, alex getting hired), John suddenly conjured up the code that the 
GPGPU people had all along: TCore. This signalled to us that John had caught 
our plan to bypass him, and that he now took full control of the situation. 
It took about a month before John made big online promises about how this 
code could provide the basis for a DRM driver, and that it would become 
available to all soon. We managed to confirm, in a direct call with both the 
GPGPU people and Mr Bridgman that the GPGPU people had no idea about that 
John intended to make this code fully public straigth away. The GPGPU people 
assumed that Johns questions were fully aimed at handing us the code without 
us getting tainted, not that John intended to hand this out publically 
immediately. To this day, TCore has not surfaced publically, but we know 
that several people inside the Xorg community have this code, and i 
seriously doubt that all of those people are under NDA with AMD.

Bridgman also ramped up the marketing campaign. He did an interview with
Beyond3D.com on the 30th where he broadly smeared out the fact that a 
community member was just hired to work with the community. John of course 
completely overlooked the SUSE involvement in everything. An attempt to 
rectify this with an interview of our own to match never materialised due to 
the heavy time constraints we are under.

On the 4th of december, a user came to us asking what he should do with a
certain RS600 chipset he had. We had heard from John that this chip was not
relevant to us, as it was not supposed to be handled by our driver (the way 
we remember the situation), but when we reported this to John, he claimed 
that he thought that this hardware never shipped and that it therefor was 
not relevant. The hardware of course did ship, to three different vendors, 
and Egbert had to urgently add support for it in the I2C and memory 
controller subsystems when users started complaining.

One very notable story of this time is how the bring-up of new hardware was
handled. I mentioned the Rv670 before, we still didn't get this hardware at
this point, as [SANTA] was still trying to get his hands on this. What we 
did receive from [SANTA] on the 11th of december was the next generation 
hardware, which needed a lot of bring-up work: rv620/rv635. This made us 
hopeful that, for once, we could have at least basic support in our driver 
on the same day the hardware got announced. But a month and a half later, 
when this hardware was launched, John still hadn't provided any information. 
I had quite a revealing email exchange with [SANTA] about this too, where he 
wondered why John was stalling this. The first bit of information that was 
useful to us was given to us on February the 9th, and we had to drag a lot 
of the register level information out of John ourselves. Given the massive 
changes to the hardware, and the induced complications, it of course took us 
quite some time to get this work finished. And this fact was greedily abused 
by John during the bring-up time and afterwards. But what he always 
overlooks is that it took him close to two months to get us documentation to 
use even atombios successfully.

The week of december the 11th is also where Alex was fully assimilated into
ATI. He of course didn't do anything much in his first week at AMD, but in 
his second week he started working on the radeon driver again. In the weekly 
call then John re-assured us that Alex was doing this work purely in his 
spare time, that his task was helping us get the answers we needed. In all 
fairness, Alex did provide us with register descriptions more directly than 
John, but there was no way he was doing all this radeon driver work in his 
spare time. But it would take John another month to admit it, at which time 
he took Alex working on the radeon driver as an acquired right.

Bridgman now really started to spend a lot of time on phoronix.com. He 
posted what i personally see as rather biased comparisons of the different 
drivers out there, and he of course beat the drums heavily on atombios. This 
is also the time where he promised the TCore drop.

Then games continued as usual for the next month or so, most of which are
already encompassed in previous paragraphs.

One interesting sidenote is that both Alex and John were heavy on rebranding
AtomBIOS. They actively used the word scripts for the call tables inside
atombios, and were actively labelling C modesetting code as legacy. Both 
quite squarely go in against the nature of AtomBIOS versus C code.

Halfway february, discussions started to happen again about the RS690 
hardware. Users were having problems. After a while, it became clear that 
the RS690, all along, had a display block called DDIA capable of driving a 
DVI digital signal. RS690 was considered to be long and fully supported, and 
suddenly a major display block popped into life upon us. Bridgmans excuse 
was the same as with the RS600; he thought this never would be used in the 
first place and that therefor we weren't really supposed to know about this.

On the 23rd and 24th of February, we did FOSDEM. As usual, we had an X.org 
Developers Room there, for which i did most of the running around. We worked
with Michael Larabel from phoronix to get the talks taped and online. 
Bridgman gave us two surprises at FOSDEM... For one, he released 3D 
documentation to the public, we learned about this just a few days before, 
we even saw another phoronix statement about TCore being released there and 
then, but this never surfaced.

We also learned, on friday evening, that the radeon driver was about to gain
Xvideo support for R5xx and up, through textured video code that alex had 
been working on. Not only did they succeed in stealing the sunshine away 
from the actual hard work (organising fosdem, finding out and implementing 
bits that were never supposed to be used in the first place, etc), they gave 
the users something quick and bling and showy, something we today still 
think was provided by others inside AMD or generated by a shader compiler. 
And this to the competing project.

At FOSDEM itself Bridgman of course was full of stories. One of the most
remarkable ones, which i overheard when on my knees clearing up the 
developers room, was bridgman talking to the Phoronix guy and some community 
members, stating that people at ATI actually had bets running on how long it 
would take Dave Airlie to implement R5xx 3D support for the radeon driver. 
He said this loudly, openly and with a lot of panache. But when this support 
eventually took more than a month, i took this up with him on the phonecall, 
leading of course to his usual nervous laughing and making up a bit of a 
coverstory again.

Right before FOSDEM, Egbert had a one-on-one phonecall with Bridgman, hoping 
to clear up some things. This is where we first learned that bridgman had 
sold atombios big time to redhat, but i think you now know this even better 
than i do, as i am pretty hazy on details there. Egbert, if i remember 
correctly, on the very same day, had a phonecall with redhat's Kevin Martin 
as well. But since nothing of this was put on mail and i was completely 
swamped with organising FOSDEM, i have no real recollection of what came out 
of that.

Alex then continued with his work on radeon, while being our only real 
point of contact for any information. There were several instances where our 
requests for information resulted in immediate commits to the radeon driver, 
where issues were fixed or bits of functionality were added. Alex also ended 
up adding RENDER acceleration to the radeon driver, and when Bridgman was in
Nuernberg, we clearly saw how Bridgman sold this to his management: alex was
working on bits that were meant to be ported directly to RadeonHD.

In march, we spent our time getting rv620/635 working, dragging information,
almost register per register out of our friends. We learned about the 
release of RS780 at CEBIT, meaning that we had missed yet another 
opportunity for same day support. John had clearly stated, at the time of 
the rv620/635 launch that "integrated parts are the only place where we are 
'officially' trying to have support in place at or very shortly after 
launch".

And with that, we pretty much hit the time frame when you, [EXEC], got
involved...

One interesting kind of new development is Kgrids. Kgrids is some code 
written by some ATI people that has some useful information for driver 
development, especially useful for the DRM. John Bridgman told the phoronix 
author 2-3 weeks ago already that he had this, and that he was planning to 
release this immediately. The phoronix author immediately informed me, but 
the release of this code never happened so far. Last thursday, Bridgman 
brought up Kgrids in the weekly technical phonecall already... When asked 
how long he knew about this, he admitted that they knew about this for 
several weeks already, which makes me wonder why we weren't informed 
earlier, and why we suddenly were informed then...


As said, the above, combined with what was already known to this executive, and the games he saw being played first-hand from April 2008 through June 2008, marked the beginning of the end of the RadeonHD project.

I too disconnected myself more and more from development on this, with Egbert taking the brunt of the work. I instead spent more time on the next SuSE enterprise release, having fun doing something productive and with a future. Then, after FOSDEM 2009, when 24 people at the SuSE Nuernberg office were laid off, I was almost relieved to be amongst them.

Time flies.

Dries Buytaert: How Nasdaq offers a Drupal distribution as-a-service


Nasdaq CIO and vice president Brad Peterson at the Acquia Engage conference showing the Drupal logo on Nasdaq's MarketSite billboard at Times Square NYC

Last October, I shared the news that Nasdaq Corporate Solutions has selected Acquia and Drupal 8 for its next generation Investor Relations and Newsroom Website Platforms. 3,000 of the largest companies in the world, such as Apple, Amazon, Costco, ExxonMobil and Tesla are currently eligible to use Drupal 8 for their investor relations websites.

How does Nasdaq's investor relations website platform work?

First, Nasdaq developed a "Drupal 8 distribution" that is optimized for creating investor relations sites. They started with Drupal 8 and extended it with both contributed and custom modules, documentation, and a default Drupal configuration. The result is a version of Drupal that provides Nasdaq's clients with an investor relations website out-of-the-box.

Next, Nasdaq decided to offer this distribution "as-a-service" to all of their publicly listed clients through Acquia Cloud Site Factory. By offering it "as-a-service", Nasdaq's customers don't have to worry about installing, hosting, upgrading or maintaining their investor relations site. Nasdaq's new IR website platform also ensures top performance, scalability and meets the needs of strict security and compliance standards. Having all of these features available out-of-the-box enables Nasdaq's clients to focus on providing their stakeholders with critical news and information.

Offering Drupal as a web service is not a new idea. In fact, I have been talking about hosted service models for distributions since 2007. It's a powerful model, and Nasdaq's Drupal 8 distribution as-a-service is creating a win-win-win-win. It's good for Nasdaq's clients, good for Nasdaq, good for Drupal, and in this case, good for Acquia.

It's good for Nasdaq's customers because it provides them with a platform that incorporates the best of both worlds; it gives them the maintainability, reliability, security and scalability that comes with a cloud offering, while still providing the innovation and freedom that comes from using Open Source.

It is great for Nasdaq because it establishes a business model that leverages Open Source. It's good for Drupal because it encourages Nasdaq to invest back into Drupal and their Drupal distribution. And it's obviously good for Acquia as well, because we get to sell our Acquia Site Factory Platform.

If you don't believe me, take Nasdaq's word for it. In the video below, which features Stacie Swanstrom, executive vice president and head of Nasdaq Corporate Solutions, you can see how Nasdaq pitches the value of this offering to their customers. Swanstrom explains that with Drupal 8, Nasdaq's IR Website Platform brings "clients the advantages of open source technology, including the ability to accelerate product enhancements compared to proprietary platforms".

Wim Leers: A career thanks to open source

Julien Pivotto: Augeas resource for mgmt


Last week, I joined the mgmt hackathon, just after Config Management Camp Ghent. It helped me understand how mgmt actually works, and that allowed me to introduce two improvements in the codebase: Prometheus support, and an Augeas resource.

I will blog later about the Prometheus support; today I will focus on the Augeas resource.

Defining a resource

Currently, mgmt does not have a DSL; it only uses plain YAML.

Here is how you define an Augeas resource:

---
graph: mygraph
resources:
  augeas:
  - name: sshd_config
    lens: Sshd.lns
    file: "/etc/ssh/sshd_config"
    sets:
      - path: X11Forwarding
        value: no
edges:

As you can see, the augeas resource takes several parameters:

  • lens: the lens file to load
  • file: the path to the file that we want to change
  • sets: the paths/values that we want to change

Setting file will create a Watcher, which means that each time you change that file, mgmt will check if it is still aligned with what you want.
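Under the hood, these parameters map onto plain Augeas operations. As a rough illustration (this is not mgmt code; mgmt uses the Go bindings mentioned below), the same change made through the python-augeas bindings would look like this:

import augeas

aug = augeas.Augeas()  # loads the stock lenses, including Sshd.lns
# The Sshd lens exposes /etc/ssh/sshd_config under the /files tree.
aug.set("/files/etc/ssh/sshd_config/X11Forwarding", "no")
aug.save()             # writes the change back to the file on disk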

Code

The code can be found there: https://github.com/purpleidea/mgmt/pull/128/files

We are using the Go bindings for Augeas: https://github.com/dominikh/go-augeas/. Unfortunately, those bindings only support a recent version of Augeas, so we needed to vendor them to make the build pass on Travis.

Future plans

Future plans for this resource are to add some parameters: probably one similar to Puppet's "onlyif" parameter, and an "rm" parameter.


Dries Buytaert: Distributions remain a growing opportunity for Drupal


Yesterday, after publishing a blog post about Nasdaq's Drupal 8 distribution for investor relations websites, I realized I don't talk enough about "Drupal distributions" on my blog. The ability for anyone to take Drupal and build their own distribution is not only a powerful model, but something that is relatively unique to Drupal. To the best of my knowledge, Drupal is still the only content management system that actively encourages its community to build and share distributions.

A Drupal distribution packages a set of contributed and custom modules together with Drupal core to optimize Drupal for a specific use case or industry. For example, Open Social is a free Drupal distribution for creating private social networks. Open Social was developed by GoalGorilla, a digital agency from the Netherlands. The United Nations is currently migrating many of their own social platforms to Open Social.

Another example is Lightning, a distribution developed and maintained by Acquia. While Open Social targets a specific use case, Lightning provides a framework or starting point for any Drupal 8 project that requires more advanced layout, media, workflow and preview capabilities.

For more than 10 years, I've believed that Drupal distributions are one of Drupal's biggest opportunities. As I wrote back in 2006: "Distributions allow us to create ready-made downloadable packages with their own focus and vision. This will enable Drupal to reach out to both new and different markets."

To capture this opportunity we needed to (1) make distributions less costly to build and maintain and (2) make distributions more commercially interesting.

Making distributions easier to build

Over the last 12 years we have evolved the underlying technology of Drupal distributions, making them even easier to build and maintain. We began working on distribution capabilities in 2004, when the CivicSpace Drupal 4.6 distribution was created to support Howard Dean's presidential campaign. Since then, every major Drupal release has advanced Drupal's distribution building capabilities.

The release of Drupal 5 marked a big milestone for distributions as we introduced a web-based installer and support for "installation profiles", which was the foundational technology used to create Drupal distributions. We continued to make improvements to installation profiles during the Drupal 6 release. It was these improvements that resulted in an explosion of great Drupal distributions such as OpenAtrium (an intranet distribution), OpenPublish (a distribution for online publishers), Ubercart (a commerce distribution) and Pressflow (a distribution with performance and scalability improvements).

Around the release of Drupal 7, we added distribution support to Drupal.org. This made it possible to build, host and collaborate on distributions directly on Drupal.org. Drupal 7 inspired another wave of great distributions: Commerce Kickstart (a commerce distribution), Panopoly (a generic site building distribution), Opigno LMS (a distribution for learning management services), and more! Today, Drupal.org lists over 1,000 distributions.

Most recently we've made another giant leap forward with Drupal 8. There are at least 3 important changes in Drupal 8 that make building and maintaining distributions much easier:

  1. Drupal 8 has vastly improved dependency management for modules, themes and libraries thanks to support for Composer.
  2. Drupal 8 ships with a new configuration management system that makes it much easier to share configurations.
  3. We moved a dozen of the most commonly used modules into Drupal 8 core (e.g. Views, WYSIWYG, etc), which means that maintaining a distribution requires less compatibility and testing work. It also enables an easier upgrade path.

Open Restaurant is a great example of a Drupal 8 distribution that has taken advantage of these new improvements. The Open Restaurant distribution has everything you need to build a restaurant website and uses Composer when installing the distribution.

More improvements are already in the works for future versions of Drupal. One particularly exciting development is the concept of "inheriting" distributions, which allows Drupal distributions to build upon each other. For example, Acquia Lightning could "inherit" the standard core profile – adding layout, media and workflow capabilities to Drupal core, and Open Social could inherit Lightning - adding social capabilities on top of Lightning. In this model, Open Social delegates the work of maintaining Layout, Media, and Workflow to the maintainers of Lightning. It's not too hard to see how this could radically simplify the maintenance of distributions.

The less effort it takes to build and maintain a distribution, the more distributions will emerge. The more distributions that emerge, the better Drupal can compete with a wide range of turnkey solutions in addition to new markets. Over the course of twelve years we have improved the underlying technology for building distributions, and we will continue to do so for years to come.

Making distributions commercially interesting

In 2010, after having built a couple of distributions at Acquia, I used to joke that distributions are the "most expensive lead generation tool for professional services work". This is because monetizing a distribution is hard. Fortunately, we have made progress on making distributions more commercially viable.

At Acquia, our Drupal Gardens product taught us a lot about how to monetize a single Drupal distribution through a SaaS model. We discontinued Drupal Gardens but turned what we learned from operating Drupal Gardens into Acquia Cloud Site Factory. Instead of hosting a single Drupal distribution (i.e. Drupal Gardens), we can now host any number of Drupal distributions on Acquia Cloud Site Factory.

This is why Nasdaq's offering is so interesting; it offers a powerful example of how organizations can leverage the distribution "as-a-service" model. Nasdaq has built a custom Drupal 8 distribution and offers it as-a-service to their customers. When Nasdaq makes money from their Drupal distribution they can continue to invest in both their distribution and Drupal for many years to come.

In other words, distributions have evolved from an expensive lead generation tool to something you can offer as a service at a large scale. Since 2006 we have known that hosted service models are more compelling but unfortunately at the time the technology wasn't there. Today, we have the tools that make it easier to deploy and manage large constellations of websites. This also includes providing a 24x7 help desk, SLA-based support, hosting, upgrades, theming services and go-to-market strategies. All of these improvements are making distributions more commercially viable.

Xavier Mertens: [SANS ISC Diary] How was your stay at the Hotel La Playa?


I published the following diary on isc.sans.org: “How was your stay at the Hotel La Playa?“.

I made the following demo for a customer in the scope of a security awareness event. When speaking to non-technical people, it's always difficult to demonstrate how easily attackers can abuse their devices and data. If successfully popping up a "calc.exe" with an exploit makes a room full of security people crazy, it's not the case for "users". It is mandatory to demonstrate something that will ring a bell in their mind… [Read more]

[The post [SANS ISC Diary] How was your stay at the Hotel La Playa? has been first published on /dev/random]

Xavier Mertens: Integrating OpenCanary & DShield


Being a volunteer for the SANS Internet Storm Center, I'm a big fan of the DShield service. I think I have been feeding DShield with logs for eight or nine years now. In 2011, I wrote a Perl script to send my OSSEC firewall logs to DShield. This script has been running and pushing my logs every 30 minutes for years. Later, DShield was extended to collect other logs: SSH credentials collected by honeypots (if you have an unused Raspberry Pi, there is a nice honeypot setup available). I have my own network of honeypots spread here and there on the wild Internet, running Cowrie. But recently, I reconfigured all of them to use another type of honeypot: OpenCanary.

Why OpenCanary? Cowrie is a very nice honeypot which can emulate a fake vulnerable host, log commands executed by the attackers and also collect dropped files. Here is an example of a Cowrie session replayed in Splunk:

Splunk Honeypot Session Replay

It's nice to capture a lot of data, but most of it (not to say all of it) is generated by bots. Honestly, I have never detected a human attacker trying to abuse my SSH honeypots. That's why I decided to switch to OpenCanary. It does not record logs as detailed as Cowrie's, but it is very modular and supports the following protocols by default:

  • FTP
  • HTTP
  • Proxy
  • MSSQL
  • MySQL
  • NTP
  • Portscan
  • RDP
  • Samba
  • SIP
  • SNMP
  • SSH
  • Telnet
  • TFTP
  • VNC

Writing extra modules is very easy, and examples are provided. By default, OpenCanary is able to write logs to the console, a file, Syslog, a JSON feed over TCP or HPFeeds. No DShield support by default? Never mind, let's add it.

As I said, OpenCanary is very modular and a new logging capability is just a new Python class in the logger.py module:

class DShieldHandler(logging.Handler):
    def __init__(self, dshield_userid, dshield_authkey, allowed_ports):
        logging.Handler.__init__(self)
        self.dshield_userid = str(dshield_userid)
        self.dshield_authkey = str(dshield_authkey)
        try:
            # Extract the list of allowed ports
            self.allowed_ports = map(int, str(allowed_ports).split(','))
        except:
            # By default, report only port 22
            self.allowed_ports = [ 22 ]

    def emit(self, record):
        ...
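The emit() body is elided above; the actual DShield submission code is in the pull request mentioned at the end of this post. Purely as a hypothetical sketch (field names and error handling assumed, not taken from the real implementation), the filtering part could look like this:

    def emit(self, record):
        # Hypothetical sketch only -- see the pull request for the real code.
        import json
        try:
            event = json.loads(self.format(record))
        except (TypeError, ValueError):
            return
        # Report only connection attempts on the configured ports.
        if int(event.get("dst_port", -1)) not in self.allowed_ports:
            return
        # Build a DShield report from 'event' and submit it using
        # self.dshield_userid and self.dshield_authkey here.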

The DShield logger needs three arguments in your opencanary.conf file:

"logger": {
    "class" : "PyLogger",
    "kwargs" : {
        "formatters": {
            "plain": {
                "format": "%(message)s"
            }
        },
        "handlers": {
            "dshield": {
                "class": "opencanary.logger.DShieldHandler",
                "dshield_userid": "xxxxxx",
                "dshield_authkey": "xxxxxxxx",
                "allowed_ports": "22,23"
            }
        }
    }
}

The DShield UserID and authentication key are available in your DShield account. I added an 'allowed_ports' parameter that contains the list of interesting ports that will be reported to DShield (by default, only SSH connections are reported). Now, I'm reporting many more connection attempts:

Daily Connections Report

Besides DShield, JSON logs are processed by my Splunk instance to generate interesting statistics:

OpenCanary Splunk Dashboard

A pull request has been submitted to the authors of OpenCanary to integrate my code. In the meantime, the code is available in my GitHub repository.

[The post Integrating OpenCanary & DShield has been first published on /dev/random]

Jeroen De Dauw: Why Every Single Argument of Dan North is Wrong


Alternative title: Dan North, the Straw Man That Put His Head in His Ass.

This blog post is a reply to Dan's presentation Why Every Element of SOLID is Wrong. It is crammed full of straw man argumentation in which he misinterprets what the SOLID principles are about. After refuting each principle he proposes an alternative, typically a well-accepted non-SOLID principle that does not contradict SOLID. If you are not that familiar with the SOLID principles and cannot spot the bullshit in his presentation, this blog post is for you. The same goes if you enjoy bullshit being pointed out and broken down.

What follows are screenshots of select slides with comments on them underneath.

Dan starts by asking “What is the Single Responsibility Principle anyway”. Perhaps he should have figured that out before giving a presentation about how it is wrong.

A short (non-comprehensive) description of the principle: systems change for various different reasons. Perhaps a database expert changes the database schema for performance reasons, perhaps a User Interface person is reorganizing the layout of a web page, perhaps a developer changes business logic. What the Single Responsibility Principle says is that ideally changes for such disparate reasons do not affect the same code. If they did, different people would get in each other's way. Possibly worse still, if the concerns are mixed together, and you want to change some UI code, suddenly you need to deal with, and thus understand, the business logic and database code.

How can we predict what is going to change? Clearly you can’t, and this is simply not needed to follow the Single Responsibility Principle or to get value out of it.

Write simple code… no shit. One of the best ways to write simple code is to separate concerns. You can be needlessly vague about it and simply state “write simple code”. I’m going to label this Dan North’s Pointlessly Vague Principle. Congratulations sir.

The idea behind the Open Closed Principle is not that complicated. To partially quote the first line on the Wikipedia Page (my emphasis):

… such an entity can allow its behaviour to be extended without modifying its source code.

In other words, when you ADD behavior, you should not have to change existing code. This is very nice, since you can add new functionality without having to rewrite old code. Contrast this to shotgun surgery, where to make an addition, you need to modify existing code at various places in the codebase.

In practice, you cannot gain full adherence to this principle, and you will have places where you will need to modify existing code. Full adherence to the principle is not the point. Like with all engineering principles, they are guidelines which live in a complex world of trade offs. Knowing these guidelines is very useful.
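A contrived sketch in Python (the names are invented for illustration): supporting a new export format below means adding a class, not modifying the existing ones.

import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    @abstractmethod
    def export(self, report: dict) -> str:
        ...

class CsvExporter(Exporter):
    def export(self, report: dict) -> str:
        return "\n".join("{},{}".format(k, v) for k, v in report.items())

# Adding JSON support later is an addition, not a modification:
class JsonExporter(Exporter):
    def export(self, report: dict) -> str:
        return json.dumps(report)

def publish(report: dict, exporter: Exporter) -> None:
    # Existing calling code never changes when new exporters appear.
    print(exporter.export(report))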

Clearly it’s a bad idea to leave in place code that is wrong after a requirement change. That’s not what this principle is about.

Another very informative “simple code is a good thing” slide.

To be honest, I’m not entirely sure what Dan is getting at with his “is-a, has-a” vs “acts-like-a, can-be-used-as-a”. It does make me think of the Interface Segregation Principle, which, coincidentally, is the next principle he misinterprets.

The remainder of this slide is about the "favor composition over inheritance" principle. This is really good advice, which has been well accepted in professional circles for a long time. This principle is about code sharing, which is generally better done via composition than inheritance (the latter creates very strong coupling). In the last big application I wrote I have several hundred classes and fewer than a handful inherit concrete code. Inheritance has a use completely different from code reuse: sub-typing and polymorphism. I won't go into detail about those here, and will just say that this is at the core of what Object Orientation is about, and that even in the application I mentioned, this is used all over, making the Liskov Substitution Principle very relevant.

Here Dan is slamming the principle for being too obvious? Really?

“Design small, role-based classes”. Here Dan changed “interfaces” into “classes”, which results in a line that makes me think of the Single Responsibility Principle. More importantly, there is a misunderstanding about the meaning of the word “interface” here. This principle is about the abstract concept of an interface, not the language construct that you find in some programming languages such as Java and PHP. A class forms an interface. This principle applies to OO languages that do not have an interface keyword, such as Python, and even to those that do not have a class keyword, such as Lua.

If you follow the Interface Segregation Principle and create interfaces designed for specific clients, it becomes much easier to construct or invoke those clients. You won’t have to provide additional dependencies that your client does not actually care about. In addition, if you are doing something with those extra dependencies, you know this client will not be affected.

This is a bit bizarre. The definition Dan provides is good enough, even though it is incomplete, which can be excused by it being a slide. From the slide it’s clear that the Dependency Inversion Principle is about dependencies (who would have guessed) and coupling. The next slide is about how reuse is overrated. As we’ve already established, this is not what the principle is about.

As to the DIP leading to DI frameworks that you then depend on… this is like saying that if you eat food you might eat non-nutritious food, which is not healthy. The fix here is to not eat non-nutritious food, not to reject food altogether. Remember the application I mentioned? It uses dependency injection all the way, without using any framework or magic. In fact, 95% of the code does not bind to the web framework used, due to adherence to the Dependency Inversion Principle. (Read more about this application)
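To make "dependency injection all the way, without using any framework or magic" concrete, here is a minimal Python sketch (class names invented for illustration); the business code only sees an abstraction, and the wiring happens in one place:

class SmtpMailer:
    def send(self, recipient: str, body: str) -> None:
        print("sending mail to", recipient)  # real code would talk to an SMTP server

class SignupService:
    def __init__(self, mailer) -> None:
        self._mailer = mailer                # depends on "something that can send", not on SMTP

    def register(self, email: str) -> None:
        self._mailer.send(email, "Welcome!")

# The composition root: the only place that knows about concrete classes.
service = SignupService(SmtpMailer())
service.register("someone@example.com")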

That attitude explains a lot about the preceding slides.

Yeah, please do write simple code. The SOLID principles and many others can help you with this difficult task. There is a lot of hard-won knowledge in our industry and many problems are well understood. Frivolously rejecting that knowledge with “I know better” is an act of supreme arrogance and ignorance.

I do hope this is the category Dan falls into, because the alternative of purposefully misleading people for personal profit (attention via controversy) rustles my jimmies.

If you’re not familiar with the SOLID principles, I recommend you start by reading their associated Wikipedia pages. If you are like me, it will take you practice to truly understand the principles and their implications and to find out where they break down or should be superseded. Knowing about them and keeping an open mind is already a good start, which will likely lead you to many other interesting principles and practices.

Frank Goossens: Music from our Tube (& Nova); Sampha

Wim Leers: OpenTracker


This is an ode to Dirk Engling’s OpenTracker.

It’s a BitTorrent tracker.

It’s what powered The Pirate Bay in 2007–2009.

I've been using it to power the downloads on http://driverpacks.net since the end of November 2010. >6 years. It has facilitated 9,839,566 downloads from December 1, 2010 until today. That's almost 10 million downloads!

Stability

It's one of the most stable pieces of software I have ever encountered. I compiled it in 2010 and it has never once crashed. I've seen uptimes of hundreds of days.

wim@ajax:~$ ls -al /data/opentracker
total 456
drwxr-xr-x  3 wim  wim   4096 Feb 11 01:02 .
drwxr-x--x 10 root wim   4096 Mar  8  2012 ..
-rwxr-xr-x  1 wim  wim  84824 Nov 29  2010 opentracker
-rw-r--r--  1 wim  wim   3538 Nov 29  2010 opentracker.conf
drwxr-xr-x  4 wim  wim   4096 Nov 19  2010 src
-rw-r--r--  1 wim  wim 243611 Nov 19  2010 src.tgz
-rwxrwxrwx  1 wim  wim  14022 Dec 24  2012 whitelist

Simplicity

The simplicity is fantastic. Getting started is incredibly simple: git clone git://erdgeist.org/opentracker .; make; ./opentracker and you're up and running. Let me quote a bit from its homepage, to show that it goes the extra mile to make users successful:

opentracker can be run by just typing ./opentracker. This will make opentracker bind to 0.0.0.0:6969 and happily serve all torrents presented to it. If ran as root, opentracker will immediately chroot to . and drop all priviliges after binding to whatever tcp or udp ports it is requested.

Emphasis mine. And I can’t emphasize my emphasis enough.

Performance & efficiency

All the while handling dozens of requests per second, opentracker causes less load than background processes of the OS. Let me again quote a bit from its homepage:

opentracker can easily serve multiple thousands of requests on a standard plastic WLAN-router, limited only by your kernels capabilities ;)

That’s also what the homepage said in 2010. It’s one of the reasons why I dared to give it a try. I didn’t test it on a “plastic WLAN-router”, but everything I’ve seen confirms it.

Flexibility

Its defaults are sane, but what if you want to have a whitelist?

  1. Uncomment the #FEATURES+=-DWANT_ACCESSLIST_WHITE line in the Makefile.
  2. Recompile.
  3. Create a file called whitelist, with one torrent hash per line.

Have a need to update this whitelist, for example for a new release of your software to distribute? Of course you don't want to restart your opentracker instance and lose all current state. It's got you covered:

  1. Append a line to whitelist.
  2. Send the SIGHUP UNIX signal to make opentracker reload its whitelist¹.
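Automating those two steps is trivial. A minimal Python sketch, assuming the whitelist path from the listing above and a single running opentracker process:

import os
import signal
import subprocess

def whitelist_torrent(info_hash, path="/data/opentracker/whitelist"):
    # 1. Append the torrent's info hash on its own line.
    with open(path, "a") as whitelist:
        whitelist.write(info_hash.strip() + "\n")
    # 2. Ask the running opentracker to reload its whitelist.
    pid = int(subprocess.check_output(["pidof", "opentracker"]).split()[0])
    os.kill(pid, signal.SIGHUP)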

Deployment

I've been in the process of moving off my current (super reliable, but also expensive) hosting. There are plenty of specialized companies offering HTTP hosting² and even rsync hosting³. Thanks to their standardization and consequent scale, they can offer very low prices.

But I also needed to continue to run my own BitTorrent tracker. There are no companies that offer that. I don't want to rely on another tracker, because I want there to be zero affiliation with illegal files⁴. This is a BitTorrent tracker that does not allow anything to be shared: it only allows the software releases made by http://driverpacks.net to be downloaded.

So, I went looking for the cheapest VPS I could find, with the least amount of resources. For USD $13.50⁵, I got a VPS with 128 MB RAM, 12 GB of storage and 500 GB of monthly traffic. Then I set it up:

  1. ssh‘d onto it.
  2. rsync‘d over the files from my current server (alternatively: git clone and make)
  3. added @reboot /data/opentracker/opentracker -f /data/opentracker/opentracker.conf to my crontab.
  4. removed the CNAME record for tracker.driverpacks.net, and instead made it an A record pointing to my new VPS.
  5. watched http://tracker.driverpacks.net:6969/stats?mode=tpbs&format=txt on both the new and the old server, to verify traffic was moving over to my new cheap opentracker VPS as the DNS changes propagated

Drupal module

Since driverpacks.net runs on Drupal, there of course is an OpenTracker Drupal module to integrate the two (I wrote it). It provides an API to:

  • create .torrent files for certain files uploaded to Drupal
  • append to the OpenTracker whitelist file⁶
  • parse the statistics provided by the OpenTracker instance

You can see the live stats at http://driverpacks.net/stats.

Conclusion

opentracker is the sort of simple, elegant software design that makes it a pleasure to use. And considering the low commit frequency over the past decade, with many of those commits being nitpick fixes, it seems its simplicity also leads to excellent maintainability. It involves the HTTP and BitTorrent protocols, yet only relies on a single I/O library, and its source code is very readable. Not only that, but it's also highly scalable.

It’s the sort of software many of us aspire to write.

Finally, its license. A glorious license indeed!

The beerware license is very open, close to public domain, but insists on honoring the original author by just not claiming that the code is yours. Instead assume that someone writing Open Source Software in the domain you’re obviously interested in would be a nice match for having a beer with.

So, just keep the name and contact details intact and if you ever meet the author in person, just have an appropriate brand of sparkling beverage choice together. The conversation will be worth the time for both of you.

Dirk, if you read this: I’d love to buy you sparkling beverages some time :)


  1. kill -s HUP $(pidof opentracker) ↩︎

  2. I’m using Gandi’s Simple Hosting↩︎

  3. https://rsync.net ↩︎

  4. Also: all my existing torrents use http://tracker.driverpacks.net:6969/announce as the URL. The free trackers I could find were all using udp://. So new leechers would then no longer be able to find the hundreds of existing seeders. Adding trackers to already-downloaded torrents is not possible. ↩︎

  5. $16.34 including 21% Belgian VAT↩︎

  6. Rather than having the Drupal module send a SIGHUP from within PHP, which requires elevated rights, I instead opted for a cronjob that runs every 10 minutes: */10 * * * * kill -s HUP $(pidof opentracker)↩︎

Bjorn Monnens: Information overload

It’s been a while since my last post (18/01/2014) … This year I made a new year’s resolution to blog more (like I did in the good old days).

My first post since the silent years will reflect a bit on why I haven’t blogged much and what I did instead. There are of course a lot of reasons, but in essence it boils down to something very simple: you set yourself priorities, and when you don’t have much time, you abandon the things that don’t have a high priority … so blogging was not as important to me as my family, my work, … (you get the picture).

But there is actually another ‘good’ reason: information overload. I’m constantly reading, watching presentations, doing small proofs of concept, … and all that consuming has taken the place of producing content myself … I hope I can do better in 2017 … let’s see 🙂

So what did I do during those last couple of years on the technical side?

Webpack + React + Redux

People who know me are aware that I sometimes complain about the JavaScript ecosystem. On a regular basis I need to decide which technologies we are going to use to build a system that needs to be maintained for many years. During the last 4 years I’ve done several assessments of which frontend framework to use. At times it really felt like the well-known blog post.

I’m pretty glad the JavaScript ecosystem is stabilising (or so it seems). I decided to take a deep dive into Webpack, React, Redux, …

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Vert.x

I’ve done several small NodeJS projects; for bigger applications I’m still more a fan of the Java / JVM world, as it still feels more mature to me. However, the concept of Node is simple and elegant, so if a similar solution exists in your preferred ecosystem, you need to do some investigating … right?

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Amazon Web Services

Using one of the online learning platforms I took a fairly long track covering a lot of the features of the AWS platform. I’ve been a frequent user of their services, but they are constantly inventing new things and there were still some blind spots for me. That’s now much better, thanks to guided training and experiments.

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Scala course

On Coursera I followed a course on Scala. I had already attended some talks and presentations, but had never programmed in it myself. During this course I had to write Scala code myself. The least I can say is that it was … interesting 🙂

simple conclusion: I’m glad I took it, but I will have to look for a more hands-on track, as this one was a bit too much focused on theory instead of building real-world applications.

NoSQL Tracks

I’ve done courses on MongoDB and Elasticsearch, both very nice products with their own strengths and weaknesses. I’m using these technologies in several of our production systems and things are stable. Like all technologies, there is some adapting and learning how they behave in a production environment, but they fill the gap that needed filling. So I’m glad we took this road.

simple conclusion: I’m glad I took the time to learn all these technologies a bit more in depth.

Mobile Development

My company builds server-side applications, frontend applications and also mobile applications. In the previous company I founded, I wasn’t “officially” allowed to build mobile apps, so I had to beef up on this … which I did. In the meantime I’ve built applications using Xamarin, native Android, native iOS (Objective-C; Swift is on the to-do list) and Ionic (React Native is on the to-do list too).

Very simple conclusion: doing what you love without technological impediments is how every developer’s life should be …

I’ll keep some more for my next post, and I’ll also write some more really technical articles.

See you soon (or so I hope)


Lionel Dricot: The 3 pillars of security

Security is a term on everyone's lips, yet very few people are able to define it or to think about it rationally.

I propose the following definition:

"Security is the set of actions and measures put in place by a community to ensure that its members respect the community's rules."

Note that individual security is only guaranteed if it is made explicit in the community's rules. If the rules state that killing is allowed, for example killing slaves, those individuals are not secure.

These actions and measures fall into three broad categories, which I call the three pillars of security: morality, consequences and cost.

The moral pillar

The first pillar is the set of moral incentives that encourage the individual to respect society's rules. This pillar therefore acts at the individual level and is transmitted through education and propaganda.

An excellent example of the moral pillar at work is "music piracy". Through heavy propaganda, the major record labels have drilled into people's heads that downloading a song is virtually the same as robbing, or even hurting, an artist.

This claim is rationally absurd, but the moral conditioning has been such that, even today, pirating music is perceived as immoral. Amusingly, the phenomenon is much weaker for software or TV series, because the image of the "robbed artist" is far less present in the collective imagination for those kinds of works.

The consequences pillar

The second pillar is a factor obtained by multiplying the risk of being caught breaking the rules by the consequence incurred when caught in the act.

For example, despite many attempts to use the moral pillar through road-safety campaigns, most drivers do not respect speed limits.

Fines were therefore introduced, sometimes very steep ones. But these fines are not a deterrent if the driver feels the risk of being caught is nil: 0 multiplied by a big fine is still 0.

Speed checks were then set up, with attempts to keep them secret and to ban radar detectors. But here too, effectiveness proved limited, as the risk was still perceived as low and as a matter of "bad luck".

On the other hand, installing automatic speed cameras with big "speed camera ahead" signs has had a drastic effect at those specific spots. Drivers slow down and respect the limit, even if only for a limited stretch.

The cost pillar

Finally, there are situations where people do not care about morality or consequences. The last security pillar therefore consists of increasing the cost required to break the rules.

This cost can take different forms: time, money, expertise, equipment.

For example, I know that a bike thief could not care less about the moral pillar. He also has little chance of being caught, so he does not fear the consequences pillar. I can slightly increase his risk of being caught by having my bike tattooed (marked with an ID), but the effect is small.

On the other hand, I can make stealing my bike as costly as possible by using a very good lock.

Stealing my bike will then require more time and more equipment than if my lock were a basic one.

The locks on your door are nothing more than an increase in the cost required to get into your home without the key. That cost will be paid either in time (if the door has to be forced) or in expertise (a locksmith will open your door in a few seconds).

Security theatre

Every security measure that is taken must act on one of these three pillars. Some measures can even act on several pillars at once: by increasing the time needed to break a rule (cost pillar), you also increase the perceived risk of being caught (consequences pillar).

However, there are also measures that fall into none of these categories. Those measures are therefore not measures aimed at increasing security.

Take, for example, soldiers patrolling the streets to counter the risk of a deranged terrorist blowing himself up. The soldiers clearly have no influence on the moral pillar. They have no influence on the consequences pillar (a suicide bomber does not care about consequences). And they have no influence on the cost either. If you want to blow yourself up, the presence of armed soldiers nearby changes nothing about your plans!

This analysis matters because it allows us to spot non-security measures: measures that are not about security but have other motivations. For instance, soldiers in the street serve to give the population the impression that the government is acting. Indeed, the only relevant action against terrorism is intelligence work and discreet operations, but then the population would have the impression that the government is doing nothing.

Measures that do not strengthen security but only serve to project an image of strengthened security are called "security theatre". In some cases these measures are justified (they reassure people); in others they are harmful (they fuel a feeling of irrational fear and are themselves a source of insecurity).

Another example: in the United States, several Republican states have put in place measures that supposedly protect against electoral fraud. The problem: these measures are utterly ineffective and address a problem whose existence has never been demonstrated, but they do have an immediate effect. They make voting very difficult, if not impossible, for a large share of minorities and of the poorest populations, who tend to vote Democrat. Under the cover of security, measures are taken whose real goal is to give one party an advantage.

Identifying security abuses

When security measures are put in place and they do not act effectively on any of the 3 pillars, vigilance is required: the motivation is not security but most certainly something else.

When measures are taken that supposedly guarantee your security, always ask yourself the right questions:

– Is the problem quantified in terms of severity and probability?
– Do the proposed measures effectively address at least one of the three pillars?
– Are the cost and the consequences of these measures proportionate to the risk they protect against?

But if we apply rationality to security, we reach the staggering conclusion that, to protect ourselves, we should take drastic measures to regulate car traffic, the quality of our food and the air we breathe. Instead, we let our emotions be manipulated, we send soldiers to risk their lives all over the world, or we fight against disc brakes on bicycles.

Deep down, we are not looking for security; we are looking to be reassured without having to change anything about our way of life. And what could be more fitting for that than a common enemy and a totalitarian regime to keep us from thinking?

Photo by CWCS Managed Hosting.

This text was published thanks to your regular support on Tipeee and on Paypal. I am @ploum, blogger, writer, speaker and futurologist. You can follow me on Facebook, Medium or contact me.

This text is published under the CC-By BE license.

Mattias Geniar: Linux kernel: CVE-2017-6074 – local privilege escalation in DCCP

The post Linux kernel: CVE-2017-6074 – local privilege escalation in DCCP appeared first on ma.ttias.be.

Patching time, again.

This is an announcement about CVE-2017-6074 [1] which is a double-free
vulnerability I found in the Linux kernel. It can be exploited to gain
kernel code execution from an unprivileged process.

[oss-security] Linux kernel: CVE-2017-6074: DCCP double-free vulnerability (local root)

This privilege escalation vulnerability is present in pretty much every kernel in use out there. CentOS 5, 6 and 7 are all vulnerable, judging by their kernel versions.

The oldest version that was checked is 2.6.18 (Sep 2006), which is
vulnerable. However, the bug was introduced before that, probably in
the first release with DCCP support (2.6.14, Oct 2005).

The kernel needs to be built with CONFIG_IP_DCCP for the vulnerability
to be present. A lot of modern distributions enable this option by
default.

[oss-security] Linux kernel: CVE-2017-6074: DCCP double-free vulnerability (local root)

Red Hat's bug tracker provides some mitigation tactics that don't require updating the kernel and rebooting your box.

Recent versions of Selinux policy can mitigate this exploit. The steps below will work with SElinux enabled or disabled.

As the DCCP module will be auto loaded when required, its use can be disabled
by preventing the module from loading with the following instructions.

 # echo "install dccp /bin/true">> /etc/modprobe.d/disable-dccp.conf 

The system will need to be restarted if the dccp modules are loaded. In most circumstances the dccp kernel modules will be unable to be unloaded while any network interfaces are active and the protocol is in use.

If you need further assistance, see this KCS article ( https://access.redhat.com/solutions/41278 ) or contact Red Hat Global Support Services.

(CVE-2017-6074) CVE-2017-6074 kernel: use after free in dccp protocol

More details are hidden behind Red Hat's subscription wall, but the mitigation tactic shown above should be sufficient in most cases.

In fact, there don't seem to be updated kernel packages for CentOS just yet, so the above is -- at the time of writing -- the only mitigation tactic you have.
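If you want to quickly verify whether a box still needs that mitigation, a small sketch like the one below can help. This is my own illustration, not part of Red Hat's instructions; it only inspects /proc/modules and the disable-dccp.conf override file shown above:

#!/usr/bin/env python3
# Sketch: check whether the dccp module is currently loaded and whether the
# modprobe override from the mitigation above is in place.
from pathlib import Path

modules = Path("/proc/modules").read_text().splitlines()
loaded = any(line.split()[0].startswith("dccp") for line in modules if line.strip())

override = Path("/etc/modprobe.d/disable-dccp.conf")
blocked = override.exists() and "install dccp /bin/true" in override.read_text()

print("dccp module loaded:", loaded)
print("modprobe override present:", blocked)
if loaded:
    print("dccp is loaded; unload it or reboot for the mitigation to take effect.")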

The post Linux kernel: CVE-2017-6074 – local privilege escalation in DCCP appeared first on ma.ttias.be.

Mattias Geniar: Kernel patching with kexec: updating a CentOS 7 kernel without a full reboot

The post Kernel patching with kexec: updating a CentOS 7 kernel without a full reboot appeared first on ma.ttias.be.

tl;dr: you can use kexec to stage a kernel upgrade in-memory without the need for a full reboot. Your system will reload the new kernel on the fly and activate it. Every running service gets restarted as the new kernel is loaded, but you skip the entire bootloader & hardware initialization.

By using kexec you can upgrade your running Linux machine's kernel without a full reboot. Keep in mind, there's still a new kernel load, but it's significantly faster than doing the whole bootloader stage and hardware initialization phase performed by the system firmware (BIOS or UEFI).

Yes, calling this kernel upgrades without reboots is a vast exaggeration. You skip parts of the reboot, though, usually the slowest parts.

Installing kexec

On a CentOS 7 machine the kexec tools should be installed by default, but just in case they aren't:

$ yum install kexec-tools

After that, the kexec binary should be available to you.

Install your new kernel

In this example I'll upgrade a rather old CentOS 7 kernel to the latest.

$ uname -r
3.10.0-229.14.1.el7

So I'm now running the 3.10.0-229.14.1.el7 kernel.

To upgrade your kernel, first install the latest kernel packages.

$ yum update kernel
...
===================================================================================
 Package                 Arch      Version                        Repository  Size
===================================================================================
Installing:
 kernel                  x86_64    3.10.0-514.6.1.el7             updates     37 M

This will install the 3.10.0-514.6.1.el7 kernel on my machine.

So a quick summary (on new lines, so you see the kernel version difference):

From: 3.10.0-229.14.1.el7
To: 3.10.0-514.6.1.el7

$ rpm -qa | grep kernel | sort
kernel-3.10.0-229.14.1.el7.x86_64
kernel-3.10.0-514.6.1.el7.x86_64

Once you installed the new kernel, it's time for the kexec in-memory upgrading magic.

In-memory kernel upgrade with kexec

As a safety precaution, unload any previously staged kernel first. This is harmless and makes sure you start "cleanly" with your upgrade process.

$ kexec -u

Now, stage the new kernel to be loaded. Note that these are the version numbers of the kernel you just installed with yum, as shown above.

$ kexec -l /boot/vmlinuz-3.10.0-514.6.1.el7.x86_64 \
 --initrd=/boot/initramfs-3.10.0-514.6.1.el7.x86_64.img \
 --reuse-cmdline

Careful: the next command will reload into the new kernel and will impact running services!

Once prepared, start kexec.

$ systemctl kexec

Your system will freeze for a couple of seconds, load the new kernel and be good to go.

Some benchmarks

A very quick and unscientific benchmark of doing a yum update kernel with and without kexec.

Normal way, kernel upgrade + reboot: 28s
Kexec way, kernel upgrade + reload: 19s

So you still spend a couple of seconds loading the new kernel, but on big physical machines with lots of RAM the savings will be even bigger, as the entire POST check is skipped with this method.

Here's a side-by-side run of the same kernel update. On the left: the kexec flow you've read above. On the right, a classic yum update kernel && reboot.

Notice how the left VM never goes into the BIOS or POST checks.

If you're going to be automating these updates, have a look at some existing scripts to get you going: kexec-reboot, ArchWiki on kexec.
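For completeness, here is a minimal sketch of what such an automation script could look like. This is my own illustration, not one of the scripts linked above; it assumes the CentOS 7 /boot layout used in this post and needs to run as root:

#!/usr/bin/env python3
# Sketch: stage the newest installed kernel with kexec and reload into it,
# i.e. the kexec -u / kexec -l / systemctl kexec steps from above.
import glob
import re
import subprocess

def newest_kernel_version() -> str:
    # Pick the highest version among the installed /boot/vmlinuz-* files.
    versions = [re.sub(r"^.*/vmlinuz-", "", p) for p in glob.glob("/boot/vmlinuz-*")]
    # Naive numeric sort; good enough for a sketch.
    return max(versions, key=lambda v: [int(x) for x in re.findall(r"\d+", v)])

def kexec_into(version: str) -> None:
    subprocess.run(["kexec", "-u"], check=True)  # unload any previously staged kernel
    subprocess.run(["kexec", "-l", "/boot/vmlinuz-" + version,
                    "--initrd=/boot/initramfs-" + version + ".img",
                    "--reuse-cmdline"], check=True)
    subprocess.run(["systemctl", "kexec"], check=True)  # running services will restart!

if __name__ == "__main__":
    version = newest_kernel_version()
    print("kexec'ing into", version)
    kexec_into(version)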

The post Kernel patching with kexec: updating a CentOS 7 kernel without a full reboot appeared first on ma.ttias.be.

Xavier Mertens: Am I Affected by Cloudbleed?

Yesterday, Cloudflare posted an incident report on their blog about an issue discovered in their HTML parser. A very nice report which is worth a read! As usual in our cyber world, the vulnerability quickly received a nice name and logo: “Cloudbleed“. I won’t explain the vulnerability in detail here; there are already multiple reviews of this incident.

According to Cloudflare, the impact is the following:

This included HTTP headers, chunks of POST data (perhaps containing passwords), JSON for API calls, URI parameters, cookies and other sensitive information used for authentication (such as API keys and OAuth tokens).

A lot of interesting data could have been disclosed, so my biggest concern was: “Am I affected by Cloudbleed?” With Cloudflare being a key player on the Internet, the chances of visiting websites protected by their services are very high. How do you make an inventory of those websites? The idea is to use Splunk: if your DNS resolvers’ logs are indexed by Splunk, you can use a lookup table to search for IP addresses belonging to Cloudflare.

Cloudflare is transparent and publicly announces the IP subnets they use (both IPv4 & IPv6). By default, Splunk does not perform lookups against CIDR ranges directly, so I created the complete list of IP addresses with a few lines of Python:

#!/usr/bin/python
# IP Sources:
# https://www.cloudflare.com/ips/
from netaddr import IPNetwork
cidrs = [
  '103.21.244.0/22', '103.22.200.0/22', '103.31.4.0/22', '104.16.0.0/12',
  '108.162.192.0/18', '131.0.72.0/22', '141.101.64.0/18', '162.158.0.0/15',
  '172.64.0.0/13', '173.245.48.0/20', '188.114.96.0/20', '190.93.240.0/20',
  '197.234.240.0/22', '198.41.128.0/17', '199.27.128.0/21' ]
for cidr in cidrs:
  for ip in IPNetwork(cidr):
    print '%s' % ip

The generated file can now be imported as a lookup table in Splunk. My DNS requests are logged through a Bro instance. Using the following query, I extracted the URLs that resolve to a Cloudflare IP address:

sourcetype=bro_dns rcode=A NOT qclass = "*.cloudflare.com" |
lookup cloudflare.csv TTLs OUTPUT TTLs as ip |
search ip="*" |
dedup qclass |
table qclass

(The query is very easy to adapt to your own environment.)

Over the last 6 months, I got a list of 158 websites. The last step is manual: review the URLs and, if you have accounts or posted sensitive information with them, it’s time to change your passwords / API keys!
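If you don’t have Bro logs and Splunk at hand, a quick-and-dirty alternative (my own sketch, not part of the original workflow) is to resolve a list of hostnames you care about and check them against the same Cloudflare CIDR list, reusing the netaddr library from the script above; the hostnames below are only examples:

#!/usr/bin/env python3
# Sketch: resolve hostnames and check whether they sit behind Cloudflare,
# using the same CIDR list as the script above.
import socket
from netaddr import IPAddress, IPNetwork

CLOUDFLARE_CIDRS = [IPNetwork(c) for c in (
    '103.21.244.0/22', '103.22.200.0/22', '103.31.4.0/22', '104.16.0.0/12',
    '108.162.192.0/18', '131.0.72.0/22', '141.101.64.0/18', '162.158.0.0/15',
    '172.64.0.0/13', '173.245.48.0/20', '188.114.96.0/20', '190.93.240.0/20',
    '197.234.240.0/22', '198.41.128.0/17', '199.27.128.0/21')]

def behind_cloudflare(hostname):
    try:
        ip = IPAddress(socket.gethostbyname(hostname))
    except socket.gaierror:
        return False
    return any(ip in cidr for cidr in CLOUDFLARE_CIDRS)

for host in ['example.com', 'www.cloudflare.com']:  # replace with your own list
    print(host, behind_cloudflare(host))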

[The post Am I Affected by Cloudbleed? has been first published on /dev/random]

Frank Goossens: Autoptimize CSS defer switching to loadCSS (soon)

Historically, Autoptimize used its own JS implementation to defer the loading of the main CSS, hooking into the domContentLoaded event, and this has worked fine. I knew about Filament Group’s loadCSS, but saw no urgent reason to implement it, as I saw no big advantages over my homegrown solution. That changed when criticalcss.com’s Jonas contacted me, pointing out that the best way to load CSS is now the rel="preload" approach, which as of loadCSS 1.3 is also the way loadCSS works:

<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">

As rel="preload" is currently only supported by Chrome & Opera (both Blink-based), a JS polyfill that uses loadCSS is needed to load the CSS in other browsers. Hopefully other browsers catch up on rel="preload", because it is a very elegant solution which allows the CSS to load sooner than with the old code while still being non-render-blocking. What more could one wish for (“Unicorns” my 10yo daughter might say, but what does she know)?

Anyway, I have integrated this new approach in a separate branch on GitHub; you can download the zip file here to test it, along with all the other fixes and improvements since 2.1.0. Let me know what you think. Happy preloading!
