The Microsoft initiative to open source the .Net platform (of which the MSBuild tool is a part) has been discussed a lot in the past (and I have something to say about it as well, probably later in this post), but the fanfare has died down quite a bit since the last announcement. One might say that the reason they didn’t open source the entire thing at once was so that Microsoft could space out the announcements and artificially generate continued buzz about their platform, but knowing how these things usually work, it’s much more likely that preparing a project for open source is difficult and time consuming, doubly so (or a thousand times so) for a project as large as .Net, so it makes sense to do it in parts.
Well, at least until the entire world uses high DPI screens. Let’s see an example:
This is a “call to action” effect on a button – it pulses slightly to get attention. This has proven to be really effective at improving “conversion” (web term for “getting you to do that thing I want you to do”). But even if you are not a designer at heart, it’s easy to see that the text in the button is pulsing at a different speed than the button itself, and this creates a really jarring effect.
Every now and then, when discussing EVs (electric vehicles) or other alternative fuel cars(1) that offer basically zero pollution (in the form of greenhouse gases and other poisonous gases), someone raises the argument that these cars just “move the pollution upstream” to the electric power plants, and that you are still polluting just the same.
But this argument only works if the power production “upstream” really pollutes as much as an onboard gas engine – which sounds odd to me, because as you scale up the engine – from something that has to be small enough to fit in a car to something limited only by the requirement to be cost effective – surely you can achieve large efficiency gains, even when burning the same fuel?
So here are some numbers:
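As a back-of-the-envelope sketch of that reasoning (all efficiency figures below are illustrative assumptions I picked for the example, not measured data): a large stationary power plant converts fuel to electricity far more efficiently than a car engine converts fuel to motion, and even after grid, charging and motor losses the electric chain can come out ahead:

```shell
# Back-of-the-envelope well-to-wheel comparison. All figures are
# illustrative assumptions, not measurements:
#   combined-cycle gas plant ~55% efficient, grid transmission ~93%,
#   battery charge/discharge ~85%, electric motor ~90%,
#   versus a gasoline engine at ~25% in real-world driving.
ev_eff=$(awk 'BEGIN { printf "%.3f", 0.55 * 0.93 * 0.85 * 0.90 }')
ice_eff=0.25
echo "EV fuel-to-wheel:  $ev_eff"
echo "ICE fuel-to-wheel: $ice_eff"
```

Even with these rough assumed numbers the electric chain lands around 39% fuel-to-wheel versus roughly 25% for the gasoline engine; the exact figures vary widely by plant, grid and vehicle.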
- like the new “hydrogen powered” cars [↩]
As anyone who works with the Amazon Web Services API knows, when you submit requests to an AWS service you need to sign the request with your secret key, in order to authenticate your account. The AWS signing process has changed through the years – I implemented an earlier version (I think version 1) in a previous blog post, upload files to Amazon S3 using Bash – and new APIs, as well as newer versions of existing APIs, opt in to the newer signing process.
The current most up-to-date version of the signing process is known as the Signature Version 4 Signing Process and is quite complex, but recently I had the need to use an AWS API that requires requests to be signed using the version 4 process from a bash script(1), so it was time to dust off the old scripting skills and see if I could get this much, much more elaborate signing process to work in bash – and (maybe surprisingly) it is quite doable.
Without further ado, here is the code:
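As a sketch of the heart of the version 4 process – deriving the signing key via a chain of HMAC-SHA256 operations – the following works in bash, assuming openssl and od are available (the secret key, date, region and service values are dummy placeholders, not real credentials):

```shell
#!/usr/bin/env bash
# Signature Version 4 signing-key derivation: a chain of HMAC-SHA256
# operations seeded with "AWS4" + the secret key.
set -eu

# hex-encoded key in $1, message on stdin; prints a hex digest
hmac_sha256() {
  openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" | sed 's/^.* //'
}

secret='EXAMPLESECRETKEY'   # dummy placeholder, not a real key
date='20140730' region='us-east-1' service='iam'

# Hex-encode the seed so every step can pass its key via hexkey:
kSecret=$(printf 'AWS4%s' "$secret" | od -An -tx1 | tr -d ' \n')
kDate=$(printf '%s' "$date"           | hmac_sha256 "$kSecret")
kRegion=$(printf '%s' "$region"       | hmac_sha256 "$kDate")
kService=$(printf '%s' "$service"     | hmac_sha256 "$kRegion")
kSigning=$(printf '%s' 'aws4_request' | hmac_sha256 "$kService")
echo "$kSigning"
```

The resulting signing key is then used to HMAC the “string to sign” built from the canonical request; that part (canonicalization, signed headers, payload hashing) is where most of the remaining complexity lives.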
[This is mostly a summary of a discussion on Google Plus, which you can find here]
Recap: The world (or at least clueless tech journalists) was surprised to learn (once they bothered to look it up) that Microsoft will not extend Windows 7’s “mainstream support”, which is scheduled to end in January 2015 (about 6 months from now). This was all planned way in advance – Microsoft basically committed to ending “mainstream support” in 2015 by not releasing any service pack for Windows 7 since the beginning of 2010; instead they want people to move to the next version of their software. In most normal software markets this is a no-brainer – who has heard of a Macintosh user still clinging to Mac OS X 10.7? Or an Adobe Photoshop user who refuses to upgrade past CS3? But instead you now hear calls for Microsoft to throw Windows 7 an artificial lifeline, like it did with XP.
And here’s why XP will never happen again:
There is one thing that really troubled me about the Microsoft dynamic DNS fiasco that no one seems to talk about, and which I really wanted to raise, but first here’s a short recap for those not in the know: Microsoft’s “cyber-security” department convinced a US federal court to issue an order transferring 22 internet domains owned by the popular No-IP dynamic DNS service into Microsoft’s custody, in an attempt to take down specific hosts under those domains that were supposedly used as malware control centers.
The issue I have is very simple – under what conditions can it be possible for a private company to ask a court to transfer ownership of property from another private company? This sounds seriously like private policing, and somehow it is endorsed by the judicial system?! Under what authority can something like this be allowed?
This situation is made massively more grievous by the fact that the court order was given “ex parte” – legalese for “without the other party appearing to defend itself” – but even if everything had been above the table and in the clear, and the defending lawyer incredibly incompetent, what kind of argument can a private entity offer to get a court to simply transfer control of another private entity’s property?(1)
- Microsoft Cybercrime Shutdown Hit Users Says DDNS Provider (techweekeurope.co.uk)
- No-IP regains control of some domains wrested by Microsoft (pcworld.com)
- Microsoft’s “draconian” No-IP takedown hits millions (pcpro.co.uk)
- Millions of dynamic DNS users suffer after Microsoft seizes No-IP domains (arstechnica.com)
- except obviously arguing that the property was stolen, which is clearly not the case [↩]
Code Spaces break-in lessons: using your infrastructure provider for backup is a single point of failure
Monday, June 30th, 2014
Summary of the events of the Code Spaces break-in: Code Spaces was hosting their services on Amazon Web Services VPS infrastructure. An attacker managed to gain access to their AWS administration console account and, after his demands for ransom were not answered, proceeded to delete all the data in the account.
The disaster recovery plan for Code Spaces was based on having machine images and data backups stored in AWS, so everything was gone, and Code Spaces basically had to shut down.
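The lesson generalizes: at least one backup copy has to live outside the blast radius of your primary provider’s credentials. A toy sketch of the principle, where throwaway local directories stand in for “primary provider” and “independent offsite storage” (in practice the second copy would be pushed, e.g. over rsync/ssh, to storage reachable only with entirely separate credentials):

```shell
# Toy illustration: local temp directories stand in for the primary
# provider account and for independent offsite storage.
primary=$(mktemp -d)
offsite=$(mktemp -d)

echo "customers table dump" > "$primary/dump.sql"   # nightly backup
cp -a "$primary/dump.sql" "$offsite/"               # independent copy

rm -rf "$primary"           # attacker wipes everything in the account
cat "$offsite/dump.sql"     # the off-provider copy survives
```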
OK, this is not a real solution for all types of problems – just a tip that worked for me today, to try if you can’t figure out what the problem is.
I’m running a VM on Amazon EC2, and looking at top, I saw that most of the CPU time was spent either in “steal/guest” or in “IRQ”.

“steal/guest” is kernel speak for “I wanted to allocate some CPU time for programs, but the hypervisor stole it” – which is not surprising on a virtualization solution, but if it happens all the time, it means that your physical host is constantly loaded by other VMs that take as much CPU time as they can. The second item, “IRQ”, is time the kernel spends handling interrupt requests from the hardware. This shouldn’t consume a significant amount of time unless the hardware has a problem – another good indication that you want to move your VPS to another physical host.
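If you want to check these counters without top, the aggregate numbers are on the “cpu” line of /proc/stat – the 6th value after the label is irq time and the 8th is steal time, both in clock ticks since boot. A quick sketch:

```shell
# Read the aggregate "cpu" line from /proc/stat. Fields after the label:
# user nice system idle iowait irq softirq steal guest guest_nice,
# all in clock ticks accumulated since boot.
read -r _ user nice system idle iowait irq softirq steal _ < /proc/stat
total=$((user + nice + system + idle + iowait + irq + softirq + steal))
echo "irq: $irq ticks, steal: $steal ticks, of $total total"
# Steal as a percentage of all CPU time since boot:
awk -v s="$steal" -v t="$total" 'BEGIN { printf "steal: %.1f%%\n", 100*s/t }'
```

Note this is an average since boot; tools like top or mpstat compute the same fields over short intervals, which is what you want for spotting a currently overloaded host.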
I use SSH daily to work with different remote services, and it’s always a very straightforward process… unless the remote server you want to work with is on a LAN somewhere behind NAT(1). When you need to access such an internal server, the only option is to SSH into the firewall(2), and then SSH again to your server of choice.
But there’s a better way, and you don’t even have to fiddle with the firewall server!
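One way to avoid the two-step dance (which may or may not be the trick described here) is OpenSSH’s client-side tunneling: a ProxyJump entry (or, on OpenSSH older than 7.3, a ProxyCommand) in ~/.ssh/config. All host names in this fragment are hypothetical:

```
# ~/.ssh/config -- host names are hypothetical examples
Host gateway
    HostName firewall.example.com
    User admin

Host internal
    HostName 10.0.0.5
    User admin
    ProxyJump gateway
    # On OpenSSH older than 7.3, use ProxyCommand instead:
    # ProxyCommand ssh -W %h:%p gateway
```

With this in place, a plain `ssh internal` connects through the gateway in a single step, and scp, rsync and git over SSH pick up the same route automatically.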