SSH-over-HTTPS for fame & profit

April 18th, 2017

In the past, I’ve discussed using SSH to circumvent restricted networks with censoring transparent proxies, but that relied on the restricted network allowing free SSH access on port 22 (what we in the industry call “the single network requirement for getting work done”).

Unfortunately, there are restricted networks that don’t even allow that – all you get is the transparent censoring HTTP proxy (which has recently become the case with the free Wi-Fi on the Israel Railways trains).

But fortunately for us, there is still one protocol which they can’t block, can’t proxy and can’t man-in-the-middle – or else they’d break the internet even for people who only read the news, search Google and watch YouTube – and that is HTTPS.

In this article I’ll cover running SSH-over-HTTPS using ProxyTunnel and Apache. The main consideration is that the target web server is also running some other websites that we can’t interrupt. The content is based on this article by Mark S. Kolich, but that article only covers plain HTTP, so in addition to some simple changes to the example configurations I also wanted to cover getting an SSL certificate. Here’s my version of the tutorial:
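
To give a taste of where this ends up, here is a minimal client-side sketch. The host names are placeholders, and it assumes the Apache side has been configured (as described in the full tutorial) to accept CONNECT requests to the SSH port on its HTTPS virtual host:

    # Tunnel SSH through an HTTP CONNECT request sent over TLS to the Apache
    # server listening on port 443; -E tells proxytunnel to TLS-encrypt the
    # connection to that "proxy" (host names below are placeholders):
    ssh -o ProxyCommand='proxytunnel -E -p myserver.example.com:443 -d localhost:22' user@myserver.example.com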

Read the rest of this entry »

Is Precise Canonical’s XP?

March 15th, 2017

Canonical, makers of the Ubuntu operating system, have just announced that their about-to-expire “long term support” version is getting a longer “security only” life extension.

Sound familiar?

As with other vendors who have offered such life extensions in the past, the new support contract will only be offered to corporations that subscribe to the pricey commercial support package (at $250/year per VM and $750/year per physical server).

Read the rest of this entry »

The GitLab system crash and what we can learn from it

February 1st, 2017

Last night, GitLab’s hosted service (gitlab.com) suffered a database crash and the service went down for a day(1).

I’m not going to discuss the technicalities of the downtime (which is covered extensively in the blog post linked above), except to note that “shit happens”. My main takeaways from it are basically two:

  1. Don’t let tired people handle critical system issues, and if you ever find yourself juggling the third production issue at midnight after a full day of work – just say: “no, I’m not going to fix this – someone else must step in, or we leave the system down until tomorrow”.
  2. The GitLab process for handling the failure was nothing short of amazing, and they deserve all the kudos for that: after figuring out how deep in shit they were, and posting a “sorry we’re down” page on the main web site, they:

I think this should be the standard from now on for handling system crashes in a public-facing application – 1000% transparency is how these things should be handled if you have any hope of recovering the community’s trust in your service.


  1. at the time of writing, the service is still not up, but it’s not yet even 24 hours since the crash happened

Polymer Runtime Application Configuration

January 18th, 2017

When creating web applications, we often need some parts of the application configuration to vary with the deployment environment – usually web service API endpoints have different URLs in different environments, such as using a production web service in production and a locally hosted web service during development.

Such a feature is implemented in many web frameworks and build tools for web applications, such as Gulp or Grunt. Unfortunately, when building applications using Google’s Polymer SDK, there are no such features available – reviewing the Polymer documentation, there isn’t even any mention of how one handles such mundane tasks as configuring API URLs, except hard-coding them(1).

Developers have tried to solve this problem in different ways, from adding an “environments” feature to Polymer’s internal build tool, through abusing “behavior modules”, to using an “app globals” custom element with complex code to share application-level state. None of these approaches work well or elegantly (except maybe the environments feature, if it ever gets implemented).

Here is the solution I came up with – with many thanks to Daniel Tse, who described part of the implementation in this article – using just the core Polymer elements iron-ajax and iron-meta, without any custom code. It’s not the most elegant thing that can be done, but it is relatively simple and works well. Its main downside is that the application configuration is not embedded in the application at build time but loaded from an external file when the application loads – this may even be a required feature in some scenarios, but it’s not the generally accepted practice.

Read the rest of this entry »


  1. all the iron-ajax examples, as an obvious case in point, use hard-coded URLs

Recipe: chicken and celery meatballs in a tangy celery sauce

January 13th, 2017

It turns out I don’t have a chicken meatball recipe here, a kind of meatball I quite like, so here is something I mixed up recently. The sauce is a bit sour and, to balance that, also a bit sweet, so pay attention. Also, the quantities make quite a lot of food – I usually cook for three people and like having leftovers to take for lunch at the office, so take that into account.

Read the rest of this entry »

Googlephobia Paints The World Red

June 4th, 2016

This is an open letter to Chris Fisher from Jupiter Broadcasting (and friends) regarding his recent tirade about Google “winning” the court battle against Oracle over the use of the Java APIs.

A short summary for the uninitiated:

After Oracle bought Sun, including their Java implementation, they sued Google – who had implemented (some of) the Java APIs for use in the Android operating system – for copyright infringement in some source code, copyright infringement on the API definitions themselves, and a couple of software patents they held on how to implement some Java behavior.

  1. Round one: some source code was ruled infringing, the APIs were found non-copyrightable, and the patents were found not infringed.
  2. Round two: a federal appeals court (one that normally rules on patent issues) upheld the rulings on the copied code (for Oracle) and the patents (for Google), but ruled APIs copyrightable and Google infringing on them.
  3. Round three: a jury found that Google’s use of the Java APIs was fair use and that no damages should be awarded.

After the last jury decision, there was a lot of back and forth on the internet; notably, one Ars Technica article (“op-ed”) by an Oracle lawyer claimed that the result boils down to nullifying any and all open source licenses:

if you offer your software on an open and free basis, any use is fair use.

Then we come to Chris Fisher – as the host of his Linux Action Show podcast, he has spoken out against Google many times in the past, but his tirade in the discussion of the Oracle vs. Google case in the most recent Linux Action Show #419 really demonstrates the extent of his Googlephobia (LAS #419, 0:46:42):

Read the rest of this entry »

Script day: persistent memoize in bash

September 10th, 2015

One type of task that I often find myself implementing as a bash script is to periodically generate some data and display or operate on it – maybe through a cron job, watch, or simply a loop. Sometimes part of the process is an expensive computation (it could be network based, IO intensive, or simply subject to throttling by another entity). The way to deal with issues like that in modern programming languages is a caching technique known as “memoization” (based on the word “memorandum”), in which the result of an expensive call is retained in memory after the first time and returned for future calls instead of re-running the expensive calculation. We also need to clear the cache every once in a while, but that’s another issue.

So, how to implement in bash?
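
Roughly speaking, the idea is to cache the output of the expensive command in a file keyed by a hash of the command line, and replay it on later calls. A minimal sketch (not the exact implementation from the full post, and with a naive choice of cache location):

    #!/usr/bin/env bash
    # Minimal persistent memoization sketch: cache a command's stdout on disk,
    # keyed by a hash of the command and its arguments.
    memoize() {
        local cachedir="${HOME}/.cache/memoize"
        mkdir -p "$cachedir"
        local key cachefile
        key=$(echo -n "$*" | md5sum | cut -d' ' -f1)
        cachefile="$cachedir/$key"
        if [ ! -f "$cachefile" ]; then
            # first call: run the expensive command and keep its output
            "$@" > "$cachefile" || { rm -f "$cachefile"; return 1; }
        fi
        cat "$cachefile"
    }

    # the second call returns immediately from the cache
    memoize curl -s https://example.com/
    memoize curl -s https://example.com/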

Read the rest of this entry »

Script Day: Cloud-init for MS-Windows, The Poor Man’s Version

August 20th, 2015

Cloud-init is a Linux technology that allows easy setup and automation of virtual machines. The concept is very simple – the VM infrastructure provides some way of setting custom data for each virtual machine (many providers call this “user data”), and when the operating system starts, the cloud-init service reads that configuration, loads a bunch of modules to handle its various parts, and lets them configure the system. As a user it is very convenient – you write a setup scenario using the variety of tools offered by cloud-init, you can store the scenario in source control so it can be developed further, and then you just launch a bunch of machines with the specified scenario and watch them configure themselves.

The situation is much worse on the MS-Windows side of the fence: want to have an MS-Windows server configured and ready to go? Start a virtual machine, connect to it using RDP, and Next, Next, Finish until your fingers are sore. Need to deploy a new version? Either retrofit an existing image (again, manually) and risk deployment side effects, or do the whole process again from scratch.

Here’s a script that tries to help a bit with the problem – at least on Amazon Web Services: a poor man’s cloud-init-like setup for MS-Windows server automation.
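
The gist of the approach, as a rough sketch (not the actual script from the post – the AMI ID, key pair, security group and PowerShell payload are all placeholders), is to push a PowerShell script through the EC2 user data, which the agent on the stock Windows AMIs runs on first boot:

    #!/usr/bin/env bash
    # Rough sketch: launch an MS-Windows EC2 instance with a PowerShell payload
    # in the user data; the EC2Config/EC2Launch agent on the stock Windows AMIs
    # executes anything wrapped in <powershell> tags on first boot.
    #
    # userdata.txt (hypothetical first-boot setup):
    #   <powershell>
    #   Rename-Computer -NewName "web-01" -Force
    #   Install-WindowsFeature Web-Server
    #   Restart-Computer
    #   </powershell>

    # all identifiers below are placeholders
    aws ec2 run-instances \
        --image-id ami-12345678 \
        --instance-type t2.medium \
        --key-name my-key \
        --security-group-ids sg-12345678 \
        --user-data file://userdata.txt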

Read the rest of this entry »

Software recommendation: MongoDB UI – Robomongo

June 21st, 2015

I’ve been looking for a while now for a good way to manage data in MongoDB database servers. The command line tool is OK, but it’s not really easy to work with, even for simple stuff like listing the contents of a collection.
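
For example, just peeking at a collection from a terminal ends up looking something like this (the database and collection names are made up):

    # listing the contents of a collection with the stock mongo shell:
    # every query is JavaScript typed on the command line, and large
    # documents quickly become unreadable even with printjson
    mongo mydb --eval 'db.orders.find().limit(10).forEach(printjson)'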

For a while I’ve been using Genghis, which has an interesting approach to a local graphical interface: it starts a local web server and invokes a web browser to access that server. The UI is pretty nice and very responsive, but it is kind of flaky and has lately stopped working for me entirely. Also, the data density you can get from a nice-looking web UI is still much lower than what you can get in a well-designed desktop UI – and when looking at a lot of data records in a database, data density is very important.

Enter Robomongo:

Read the rest of this entry »

Fix another ‘curl|sh’ bogus installation – Heroku

June 19th, 2015

The Heroku Toolbelt (which I don’t remember whether it’s mentioned in the “curlpipesh” Tumblr that Amir pointed to in his response to my “Fix RVM” post) is a CLI for managing applications on the Heroku PaaS platform. As is common (and horrible) in this day and age, they also offer a ‘curl|sh’ type install on their home page.

While the Debian/Ubuntu-specific installer is not entirely horrible – it basically adds the Heroku Toolbelt Debian repository to the APT sources list, updates the package list and installs the package – the “standalone” version is as horrible as it gets: download an unsigned binary from the internet, get root permissions and then do something.

For users of Fedora and other distributions, or just Ubuntu users who don’t like installing external repositories on their system, here is a simpler method to get the Heroku Toolbelt running on your system without root permissions and without piping scripts off the internet into a shell:
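
In outline, it boils down to fetching the client tarball yourself, unpacking it under your home directory and putting it on your PATH. A sketch (the tarball URL is an assumption, based on what the standalone installer fetches, so verify it before use):

    #!/usr/bin/env bash
    # Sketch: install the Heroku client under $HOME, without root and without
    # piping a remote script into a shell. The tarball URL is an assumption
    # (it is what the standalone installer downloads); verify before use.
    set -e
    mkdir -p ~/.local
    curl -sSL https://s3.amazonaws.com/assets.heroku.com/heroku-client/heroku-client.tgz \
        | tar -xzf - -C ~/.local
    # the tarball unpacks into a 'heroku-client' directory; put its bin/ on PATH
    echo 'export PATH="$HOME/.local/heroku-client/bin:$PATH"' >> ~/.bashrc
    export PATH="$HOME/.local/heroku-client/bin:$PATH"
    heroku version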

Read the rest of this entry »

