Archive for the ‘Articles’ Category

SSH-over-HTTPS for fame & profit

In the past, I’ve discussed using SSH to circumvent restricted networks with censoring transparent proxies, but that relied on the restricted network allowing free SSH access on port 22 (what we in the industry call the single network requirement for getting work done).

Unfortunately, there are restricted networks that don’t even allow that – all you get is the transparent censoring HTTP proxy (which has recently become the case with the free Wi-Fi on the Israel Railways trains).

But fortunately for us, there is still one protocol which they can’t block, can’t proxy and can’t man-in-the-middle – or else they’d break the internet even for people who only read news, search Google and watch YouTube – and that is HTTPS.

In this article I’ll cover running SSH-over-HTTPS using ProxyTunnel and Apache. The main consideration is that the target web server is also running some other websites that we can’t interrupt. The main content is based on this article by Mark S. Kolich, but since that one only covers plain HTTP, and since I also wanted to cover getting an SSL certificate in addition to making some simple changes to the example configurations, here’s my version of the tutorial:
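
To give a taste of where this is going, here is a minimal client-side sketch (the host name is a placeholder, and the matching Apache configuration is covered in the full article): ProxyTunnel opens a TLS connection to our own Apache on port 443 and asks it to CONNECT to the sshd listening behind it.

    # Client-side sketch only - assumes an Apache vhost at ssh.example.com:443
    # (placeholder name) that allows CONNECT to localhost:22, as set up in the article.
    # -E makes proxytunnel wrap the connection to the "proxy" (our Apache) in TLS.
    ssh -o ProxyCommand='proxytunnel -E -p ssh.example.com:443 -d localhost:22' user@ssh.example.com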

(more…)

Polymer Runtime Application Configuration

When creating web applications, we often need some parts of the application configuration to depend on the deployment environment – usually web service API endpoints have different URLs in different environments, such as using a production web service in production and a locally hosted web service during development.

Such a feature is implemented in many web frameworks and build tools for web applications, such as Gulp or Grunt. Unfortunately, when building applications using Google’s Polymer SDK, no such feature is available – reviewing the Polymer documentation, there isn’t even any mention of how one handles such mundane tasks as configuring API URLs, except hard-coding them1.

Developers have tried to solve this problem in different ways, from adding an “environments” feature to Polymer’s internal build tool, through abusing “behavior modules”, to using an “app globals” custom element with complex code to share application-level state. None of these approaches works well or elegantly (except maybe the environments feature, if it ever gets implemented).

Here is the solution I came up with – with many thanks to Daniel Tse, who described part of the implementation in this article – using only the core Polymer elements iron-ajax and iron-meta and no custom code. It’s not the most elegant thing that can be done, but it is relatively simple and works well. Its main downside is that the application configuration is not embedded in the application at build time but loaded from an external file when the application loads – this may even be a required feature in some scenarios, but it’s not the generally accepted practice.
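
The solution itself is just those two Polymer elements; purely for illustration, here is a sketch of the deploy-time side of the idea (file and directory names are my own assumptions, not taken from the article): pick the configuration file for the target environment and publish it as the single file that iron-ajax fetches when the application starts.

    # Hypothetical deploy helper - names are illustrative. The application always
    # requests config.json; we decide at deploy time which environment's
    # configuration that file contains.
    ENVIRONMENT="${1:-development}"   # e.g. ./deploy-config.sh production
    cp "config/config.${ENVIRONMENT}.json" build/default/config.json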

(more…)

  1. all the iron-ajax examples, most obviously, use hard-coded URLs

Googlephobia Paints The World Red

This is an open letter to Chris Fisher from Jupiter Broadcasting (and friends) regarding his recent tirade about Google “winning” the court battle against Oracle over the use of the Java APIs.

A short summary for the uninitiated:

After Oracle bought Sun, including their Java implementation, they sued Google – who implemented (some of) the Java APIs for use in the Android operating system – for copyright infringement in some source code, for copyright infringement on the API definitions themselves, and over a couple of software patents they held on how to implement some Java behavior. Round one: some source code was ruled infringing, the APIs were found non-copyrightable and the patents were found not infringed. Round two: a federal appeals court (one that normally rules on patent issues) upheld the rulings on the copied source code (for Oracle) and on the patents (for Google), but ruled that APIs are copyrightable and that Google infringed on them. Round three: a jury found that Google’s use of the Java APIs was fair use and no damages should be awarded.

After the last jury decision, there was a lot of back and forth on the internet, notably one Ars Technica article (an “op-ed” by an Oracle lawyer) that claimed the result boils down to nullifying any and all open source licenses:

if you offer your software on an open and free basis, any use is fair use.

Then we come to Chris Fisher – as the host of the Linux Action Show podcast, he has spoken out against Google many times in the past, but his tirade in the discussion of the Oracle vs. Google decision in the most recent episode really demonstrates the extent of his Googlephobia (LAS #419, 0:46:42):

(more…)

Script day: persistent memoize in bash

One type of task that I often find myself implementing as a bash script is to periodically generate some data and display or operate on it – maybe through a cron job, watch, or simply a loop. Sometimes part of the process is an expensive computation (it could be network based, IO intensive, or simply subject to throttling by another entity). The way to deal with issues like that in modern programming languages is a caching technique known as “memoization” (based on the word “memorandum”), in which the result of an expensive call is retained in memory after the first time and returned for future calls instead of re-running the expensive calculation. We also need to clear the cache every once in a while, but that’s another issue.

So, how do we implement this in bash?
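
Roughly along these lines – a minimal sketch of a disk-backed memoize helper (not necessarily the exact implementation shown in the full post), which caches a command’s output keyed on a hash of its command line:

    # A minimal sketch: cache the output of a command on disk, keyed by a hash of
    # the command line, and replay the cached output on subsequent calls.
    memoize() {
        local cachedir="${MEMOIZE_DIR:-$HOME/.cache/memoize}"
        mkdir -p "$cachedir"
        local key cachefile
        key=$(printf '%s' "$*" | md5sum | cut -d' ' -f1)
        cachefile="$cachedir/$key"
        if [ ! -f "$cachefile" ]; then
            "$@" > "$cachefile" || { rm -f "$cachefile"; return 1; }
        fi
        cat "$cachefile"
    }

    # Usage: the expensive call runs only once, later invocations read the cache.
    memoize curl -s https://example.com/expensive-report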

(more…)

Script Day: Cloud-init for MS-Windows, The Poor Man’s Version

Cloud-init is a Linux technology that allows easy setup and automation of virtual machines. The concept is very simple – the VM infrastructure provides some way of setting custom data for each virtual machine (many providers call this “user data”), and when the operating system starts, the cloud-init service reads that configuration and loads a bunch of modules to handle its various parts and let them configure the system. As a user it is very convenient – you write a setup scenario using the variety of tools offered by cloud-init, you can keep the scenario in source control so it can be developed further, and then you just launch a bunch of machines with the specified scenario and watch them configure themselves.

The situation is much worse on the MS-Windows side of the fence: want to have an MS-Windows server configured and ready to go? Start a virtual machine, connect to it using RDP and click Next, Next, Finish until your fingers are sore. Need to deploy a new version? Either retrofit an existing image (again, manually) and risk deployment side effects, or do the whole process again from scratch.

Here’s a script that tries to help a bit with the problem – at least on Amazon Web Services: a poor man’s cloud-init-like setup for MS-Windows server automation.
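
As a rough illustration of the mechanism (the actual script is in the full post, and every identifier below is a placeholder): on AWS, a first-boot script can be handed to a new MS-Windows instance as EC2 “user data”, and the agent on the official Windows AMIs will execute a <powershell> block from it on first boot.

    # Illustration only - AMI id, key name and file contents are placeholders.
    # userdata.ps1 holds the first-boot PowerShell, wrapped in <powershell> tags,
    # e.g.:  <powershell> Install-WindowsFeature Web-Server </powershell>
    aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.medium \
        --key-name my-key --user-data file://userdata.ps1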

(more…)

Fix another ‘curl|sh’ bogus installation – Heroku

The Heroku Toolbelt (which I don’t remember whether it’s listed on the “curlpipesh” Tumblr that Amir pointed to in his response to my “Fix RVM” post) is a CLI for managing applications on the Heroku PaaS platform. As is common (and horrible) in this day and age, they also offer a ‘curl|sh’ type install on their home page.

While the Debian/Ubuntu specific installer is not entirely horrible – it basically adds the Heroku Toolbelt Debian repository to the APT sources list, updates the package list and installs the package – the “standalone” version is as horrible as it gets: download an unsigned binary from the internet, get root permissions, and then do something.

For users of Fedora and other distributions, or just Ubuntu users who don’t like installing external repositories on their system, here is a simpler method to get the Heroku Toolbelt running on your system without root permissions and without running scripts downloaded straight off the internet:
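
The gist of it looks roughly like this (a sketch only – the real URL and the exact paths are in the full post): unpack the standalone client somewhere under your home directory and put it on your PATH, with no root involved.

    # Sketch - HEROKU_CLIENT_TGZ_URL stands for the standalone tarball URL from
    # Heroku's own instructions; paths are just a suggestion, adjust the directory
    # name to whatever the tarball actually extracts to.
    mkdir -p ~/.local/opt
    curl -fsSL "$HEROKU_CLIENT_TGZ_URL" | tar -xz -C ~/.local/opt
    # make the client available in this shell; add the export to ~/.bashrc to keep it
    export PATH="$HOME/.local/opt/heroku-client/bin:$PATH"
    heroku version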

(more…)

How to circumvent the free Wi-Fi content filter, for fame & profit

I’m very grateful for the free Wi-Fi on the train, in the coffee shop or on the municipal network, but the content filter they run on their proxies is sometimes really weird – for example, it blocks the website of one of my favorite podcast networks (Jupiter Broadcasting) under the category “streaming media”, even though they don’t actually host their video files there, while letting through YouTube and Facebook (where most cat videos are posted these days). So apparently Israeli Railways has such an aversion to streaming media that they won’t let me send an email to a small podcast, but I can watch all the cat videos I want. Weird. Also, most VPN services are blocked by default, so no help is coming from that direction1.

So, to fix that, here’s a small workaround using an external proxy – it is rather simple, but it does assume you have all kinds of tools that most users won’t have just lying around; if you’re a Linux geek, though, you should do just fine.
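
For a rough idea of the shape of the trick (a sketch, assuming you have SSH access to some machine outside the filtered network; myserver.example.com is a placeholder): tunnel your browser traffic to that machine and let it do the fetching for you.

    # One simple variant: open a dynamic SOCKS proxy over SSH to a machine outside
    # the filtered network, then point the browser (or the system proxy settings)
    # at the SOCKS5 proxy on localhost:1080 so all traffic exits over there.
    ssh -D 1080 -N -C user@myserver.example.com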

(more…)

  1. I’ve checked that the OpenVPN ports are blocked, as well as all the web-based proxies I could find, such as FoxyProxy and Hola. I’ve encountered in the past a weird VPN product that does not use standard UDP or TCP sockets but instead uses GRE packets; I have no idea whether that would work, but I’m assuming it won’t either.

Fix RVM “run script from the internet to install”

On Wednesday I complained about the latest UN*X fad of installing software by running scripts from the internet, without any regard to how your operating system handles software installation.

Docker, which I complained about last time, at least has a script that takes into account the local software management solution (it uses apt for Ubuntu, yum for Fedora, etc.), but RVM – the Ruby Version Manager, a popular tool among rubyists everywhere – just downloads a bunch of executable stuff (granted, most of it is scripts, but the difference is lost on most people) into an arbitrary location on your file system. At least it doesn’t install system software – oh wait, it does.

While I can’t help with RVM’s desire to install system-level software (which it actually needs, because one of the things you want RVM to do for you is compile Ruby versions from source), I can try to help you figure out how to install RVM where you want it and use it how you want it.
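
In that spirit, a minimal sketch of a more controlled installation (the paths are my own choices, not a recommendation from the post): download the installer first, actually read it, and tell RVM where to put itself via rvm_path before running it.

    # Sketch - the install location is an example; the point is that you choose it.
    curl -fsSL -o rvm-installer https://get.rvm.io
    less rvm-installer                      # actually look at what you are about to run
    export rvm_path="$HOME/.local/rvm"      # install under your home, not system-wide
    bash rvm-installer stable
    source "$rvm_path/scripts/rvm"
    rvm autolibs disable                    # don't let RVM install system packages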

(more…)

Docker and the horrible “one line installation” fad

One of the weird things that sane (or some would say “old skool”) system administrators complain about lately is that with the rising popularity of UN*X systems (mostly Mac OS X and Linux) in the world, and in particular in the software development world, people using UN*X systems want less and less to understand how to manage their systems, and the culmination of that is the

to install this complicated system-level software, just copy and paste this simple wget command into your terminal

with Docker being the most horrible example of that behavior. No sane person (who understands UN*X) would ever think that installing Docker by feeding the content of a URL to bash is a good idea, but for some reason this is the way documented and recommended by the Docker people. Other examples abound, but let’s concentrate on fixing the Docker scenario.
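
For contrast, the sane alternative is a single package-manager command (package names have varied between distribution releases, so treat these as examples):

    # Debian/Ubuntu (the package has been called docker.io):
    sudo apt-get install docker.io
    # Fedora (the package has been called docker-io, later just docker):
    sudo yum install docker-io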

(more…)

Microsoft open-sourced MSBuild

The Microsoft initiative to open source the .Net platform (of which MSBuild is a part) has been talked about a lot in the past (though I have something to say about that as well, probably later in this post), but the fanfare has died down quite a bit since the last announcement. One might say that the reason they didn’t open source the entire thing at once was so that Microsoft could space out the announcements and synthetically generate continued buzz about their platform, but knowing how these things usually work, it’s much more likely that, because preparing a project for open source is difficult and time consuming – and a project as large as .Net doubly so (or a thousand times so) – it simply makes sense to do it in parts.

But to the question at hand – what does an open source MSBuild mean to you? (more…)