Saturday, December 14, 2013

Best budget Android phone

I have had an HTC Desire for about three and a half years. When I got it in July 2010, it came loaded with Android 2.1 (I seem to remember). About a year after that, I rooted it and flashed the CyanogenMod 7 ROM. However, I really missed the HTC Rosie dialler, which allowed you to dial and search by name or number. So I put HTC's Sense back on, and that's pretty much how I kept it since then. 

Recently, the device has been rebooting when it gets hot (e.g. while using GPS). At one point it could not boot at all, and I had to go into recovery mode to restore an old backup of Sense. So it was time to look for a new device, even though after three years I was still very happy with the HTC Desire.

I was looking for a budget (read: cheap) phone. Spending R6000 and above just for a phone seems ludicrous, especially since it's bound to fall and get damaged. I also wanted a phone for my wife, so I needed two affordable phones.

I had seen the Samsung Galaxy Pocket in Jet, Ackermans and other stores, for about R800. It ran Android 2.2, and seemed OK. However, I decided to get a Galaxy Pocket Plus (S5301) instead. Dion Wired were selling it for about R900. It ran Android 4.0.4 on an 850 MHz CPU with 4GB of storage. Nothing special in terms of specs, but it had Android 4.x. 

Perhaps there are other, cheaper non-branded phones from China that have Android 4.x, but I did not see any. There is also a Huawei phone, but I think it was more expensive. 

After having it for a few weeks, I don't have any complaints. Yes, it does lag sometimes, but that is to be expected considering the CPU (even the HTC Desire had a 1GHz CPU). Most apps run on it, with the notable exception of Google Now (which requires Android 4.1 or higher). And I don't really take pictures, so I don't mind the camera.

This is my first Samsung device, after a run of three HTCs in a row (TyTN II, G1, Desire). I can honestly say that HTC's devices have a superior feel and experience. Samsung does not even offer a dialler like Rosie. I'm sure the HTC One range knocks the socks off the Galaxy S3/S4.

A recent article highlighted the Vodafone Smart Mini, which has a 1GHz CPU and runs Android 4.1, for only R799.

Thursday, November 28, 2013

Cracking SIP Digest passwords

I needed to check whether the SIP passwords that were provisioned on a device were correct, using just Wireshark traces to get the different values that were sent between the device and the server.

SIP uses MD5 digest authentication, where the password never crosses the wire in clear text. Rather, the password is hashed together with other values, and the server compares the received hash to its own hash computation, to see if they are the same. If they are, then the device must have the correct password.

When a device tries to REGISTER, the server will challenge it with a nonce. The nonce is a one-time key, meant to prevent replay attacks. The device then needs to compute the following:

HA1 = MD5("myusername:asterisk:password")
HA2 = MD5("")
response = MD5(HA1 + ":40787c47:" + HA2)

The device will send the response value to the server for comparison.
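To make the computation concrete, here is a small shell sketch of the same calculation. The username, realm, password and nonce are the example values from above; the request URI is a made-up example (per RFC 2617, HA2 is actually the MD5 of the method and request URI):

```shell
# Compute a SIP Digest response the way the device/server does (RFC 2617).
# The URI "sip:asterisk.example.com" is a made-up example value.
HA1=$(printf '%s' 'myusername:asterisk:password' | md5sum | cut -d' ' -f1)
HA2=$(printf '%s' 'REGISTER:sip:asterisk.example.com' | md5sum | cut -d' ' -f1)
RESPONSE=$(printf '%s' "$HA1:40787c47:$HA2" | md5sum | cut -d' ' -f1)
echo "$RESPONSE"
```

If the response you compute matches the one in the Wireshark trace, the provisioned password was correct.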

In my case, to make sure that the hashes were computed correctly, I checked them like this:

yusufmayet@yusuf-HP-Desktop:~$ echo -n "" | md5sum
d1c23d485345435e4c9c0bf63  -
Which gave me HA1

yusufmayet@yusuf-HP-Desktop:~$ echo -n "" | md5sum
4f20c9ab63453457e44696a3  -
Which gave me HA2

yusufmayet@yusuf-HP-Desktop:~$ echo -n "d1c23d485345435e4c9c0bf63:2d6eb162513045769f23705546gf
0000005297058d:4f20c9ab63453457e44696a3" | md5sum

2263f03dd182ff0c799c97a3c581e534  -
Which gave me response 

I could then compare the response values to make sure that they match, and therefore that the correct password was used.

Another way is to use this node application to calculate the password, based on the above values. Once you have node.js installed, simply calling this app with node and the correct values will generate the password. In my case, because the password contained numbers, I had to include digits in the lower_case array in the application code.

Friday, September 27, 2013

Desktop setup

My previous post spoke about using Synergy as a software-based KVM. But to really see how useful it is, let's see how it fits into the bigger picture. This is my desktop at the moment:

Both the laptop and the desktop are connected to the big screen - one via VGA, the other via DVI. I use one mouse and keyboard to control both environments, using Synergy. Most of the time, Ubuntu is on the big screen. However, if I need the big screen as an extended desktop for Windows, I can quickly switch the screen from the VGA to the DVI input. 

I keep files in sync between Windows and Linux using Unison. This is a two-way sync, allowing both environments to see my latest file changes. Between the MacBook Pro and Linux I use BitTorrent Sync, which is a private Dropbox replacement. I also set up Time Machine on the MBP to back up to an external HDD plugged into the Linux box.

This is pretty much an exact copy of the desktop setup I used about four years ago. I don't know how long I will be using this setup though - I might soon switch to a Mac. Maybe one day when I grow up I can get a 27'' iMac, like one of the guys here uses:

Wednesday, September 18, 2013

Synergy - software/network based KVM

Synergy is an application that allows you to share one mouse and keyboard with other PCs, similar to a KVM switch. Typically a person will have a laptop and one or more desktops at their desk - so instead of keeping a separate keyboard/mouse for each, you can use just one to control all of them.

How Synergy works.

I use Synergy running on my laptop (Windows 7) to control my desktop (Ubuntu 13.04) - all with the built-in mouse and keyboard of the laptop.

Both launch at startup. On Ubuntu, I created an entry in Startup Applications. However, this only launches Synergy once the user is logged in. To launch it earlier, you can use this recipe (I have not tested it yet):

Synergy requires an X server. That means a server must be running and synergy must be authorized to connect to that server. It's best to have the display manager start synergy. You'll need the necessary (probably root) permission to modify the display manager configuration files. If you don't have that permission you can start synergy after logging in via the .xsession file.
Typically, you need to edit three script files. The first file will start synergy before a user logs in, the second will kill that copy of synergy, and the third will start it again after the user logs in.
The contents of the scripts vary greatly between systems, so there's no one definite place where you should insert your edits. However, these scripts often exit before reaching the bottom, so put the edits near the top of the script.
The location and names of these files depend on the operating system and display manager you're using. A good guess for the location is /etc/X11. If you use kdm then try looking in /etc/kde3 or /usr/kde/version/share/config. Typical file names are:

        xdm           kdm           gdm
  1.    xdm/Xsetup    kdm/Xsetup    gdm/Init/Default (*)
  2.    xdm/Xstartup  kdm/Xstartup  gdm/PostLogin/Default (*)
  3.    xdm/Xsession  kdm/Xsession  gdm/Sessions/Default (*, **)

*) The Default file is used if no other suitable file is found. gdm will try displayname (e.g. :0) and hostname (e.g. somehost), in that order, before and instead of Default.
**) gdm may use gdm/Xsession, xdm/Xsession or dm/Xsession if gdm/Sessions/Default doesn't exist.
For a synergy client, add the following to the first file (the path to synergyc depends on where you installed it, so adjust as necessary):

  /usr/bin/killall synergyc
  sleep 1
  /usr/bin/synergyc [<options>] synergy-server-hostname

Add to the second file:

  /usr/bin/killall synergyc
  sleep 1

And to the third file:

  /usr/bin/killall synergyc
  sleep 1
  /usr/bin/synergyc [<options>] synergy-server-hostname

Note that <options> must not include -f or --no-daemon or the script will never exit and you won't be able to log in.
The changes are the same for the synergy server except replace synergyc with synergys and use the appropriate synergys command line options. Note that the first script is run as root so synergys will look for the configuration file in root's home directory then in /etc. Make sure it exists in one of those places or use the --config config-pathname option to specify its location.
Note that some display managers (xdm and kdm, but not gdm) grab the keyboard and do not release it until the user logs in for security reasons. This prevents a synergy server from sharing the mouse and keyboard until the user logs in. It doesn't prevent a synergy client from synthesizing mouse and keyboard input, though.
If you're configuring synergy to start only after you log in then edit your .xsession file. Add just what you would add to the third file above.

Monday, September 16, 2013

Remote SSH sessions: screen

This is one of the most useful things I have come across: when working via SSH/telnet on remote hosts, you can get disconnected for any reason: timeouts, connection drops, etc. You may have been in the middle of running a script, and now you won't know if it finished. You can use 'screen' to overcome this and simply reconnect where you left off. 

This link says it better:

Even though this is not a direct answer to your question, it is highly related to the problem you have. Instead of trying to keep the connection alive (all connections eventually die), you can use terminal multiplexers, like screen and tmux, that keep the session alive in the background even if your terminal gets disconnected.
Essentially, when you log in to the SSH server you immediately run screen, which will create and attach a new session:
$ screen
Then you go ahead and do your work with the shell as you would normally do. If the connection gets dropped, once you are back online and reconnected to the server over SSH, you can get a list of the current sessions with:
$ screen -ls
To reattach to a session:
$ screen -r <session>
where <session> is the PID or a session name. You will be reconnected to your session and you can continue from where you left off!
You can even detach the session and reconnect from home to pick up from the exact point where you left off. To detach the session you use C-a followed by C-d (that's Control+A and then Control+D).
There is a simple online tutorial as well.
Using screen and tmux on remote servers is considered a best practice and is highly recommended. Some people go as far as to have screen as their default login shell, so when they connect they immediately start a new screen session.

Now if you don't want to have to remember to run screen when you log in to a remote box, you can automate it with these scripts:
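For example, something along these lines in ~/.bashrc on the remote box would do it. This is just a sketch I have not battle-tested; the session name "main" is arbitrary:

```shell
# Hypothetical ~/.bashrc snippet: auto-attach to a screen session over SSH.
# Guards: only for SSH logins, only on a real terminal, never inside screen,
# and only if screen is actually installed.
if [ -n "$SSH_CONNECTION" ] && [ -t 0 ] && [ -z "$STY" ] \
   && command -v screen >/dev/null 2>&1
then
    screen -xRR main   # reattach if possible, create otherwise, allow multi-attach
fi
```

The -x flag lets you attach to the same session from two places at once, which is handy if you forgot to detach at the office.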

I'm still playing around with it to see if it works for me.

Thursday, July 25, 2013

WebRTC: sipML5, Asterisk and Chrome

I got a quick WebRTC setup working. I used the sipML5 demo setup, so I was not required to do any JavaScript coding. Mostly it was simple configuration of Asterisk, Apache, and a soft phone.

I followed these two guides:

I got audio calls working between Chrome (using a web page with sipML5) and a soft phone, and between two Chromes (Windows and Mac).
But there were some problems: one-way audio, no audio after being put on hold, etc.

There are some limitations to this setup:
  • Chrome only supports G.711
  • sipML5 does seem to do some transcoding, but I am not sure in which scenarios
  • Asterisk does not support the VP8 video codec
  • I think some of the no-audio calls were caused by SRTP issues (errors thrown on the Asterisk CLI)
I think this is how it works:
  • The browser talks to the sipML5 media stack
  • sipML5 talks SIP over WebSocket to the built-in Asterisk HTTP server
  • Asterisk then talks normal SIP to any soft-phones
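For reference, the Asterisk side of this boils down to roughly the following. This is a sketch assuming Asterisk 11's chan_sip; the peer name, secret and port are examples, not my actual config:

```
; http.conf - turn on Asterisk's built-in HTTP server, which carries the WebSocket
[general]
enabled=yes
bindaddr=0.0.0.0
bindport=8088

; sip.conf - a peer that the sipML5 page registers as
[8001]
type=friend
host=dynamic
secret=mysecret
transport=udp,ws
avpf=yes            ; WebRTC media is RTP/SAVPF
encryption=yes      ; Chrome requires SRTP
icesupport=yes      ; ICE candidates, as seen in the SDP below
directmedia=no
```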

This is how an INVITE looks from sipML5 towards Asterisk:

INVITE sip:8000@ SIP/2.0

Via: SIP/2.0/WS df7jal23ls0d.invalid;branch=z9hG4bKp6HJtQdMMTDMPbySBk9LTU3eskrWXPu4;rport
From: "Yusuf"<sip:8001@>;tag=0hwykfSs3NexZBU8tdJW
To: <sip:8000@>
Contact: "Yusuf"<sip:8001@df7jal23ls0d.invalid;rtcweb-breaker=no;click2call=no;transport=ws>;+g.oma.sip-im;;language="en,fr"
Call-ID: 1fd37734-e224-993d-0e39-4a9d0a29249d
CSeq: 64283 INVITE
Content-Type: application/sdp
Content-Length: 2055
Max-Forwards: 70
User-Agent: IM-client/OMA1.0 sipML5-v1.2013.07.17
Organization: Doubango Telecom

o=- 5377262258022282000 2 IN IP4
s=Doubango Telecom - chrome
t=0 0
a=group:BUNDLE audio
a=msid-semantic: WMS akLep2Ro5PJZMHZbFqARiPlU485AHVNwnnUK
m=audio 35291 RTP/SAVPF 111 103 104 0 8 107 106 105 13 126
c=IN IP4
a=rtcp:35291 IN IP4
a=candidate:221728540 1 udp 2113937151 55639 typ host generation 0
a=candidate:221728540 2 udp 2113937151 55639 typ host generation 0
a=candidate:2999745851 1 udp 2113937151 55640 typ host generation 0
a=candidate:2999745851 2 udp 2113937151 55640 typ host generation 0
a=candidate:2758328521 1 udp 1845501695 35291 typ srflx raddr rport 55639 generation 0
a=candidate:2758328521 2 udp 1845501695 35291 typ srflx raddr rport 55639 generation 0
a=candidate:1135916012 1 tcp 1509957375 0 typ host generation 0
a=candidate:1135916012 2 tcp 1509957375 0 typ host generation 0
a=candidate:4233069003 1 tcp 1509957375 0 typ host generation 0
a=candidate:4233069003 2 tcp 1509957375 0 typ host generation 0
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=crypto:0 AES_CM_128_HMAC_SHA1_32 inline:KAECDDgcHP+us1cLDMQnMU7r7tPUGp20csN2YXpx
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:ZcVN+IBStqIqojpkJ9yHcw7lsz/w0zniMfyqg8LH
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10
a=rtpmap:103 ISAC/16000
a=rtpmap:104 ISAC/32000
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:107 CN/48000
a=rtpmap:106 CN/32000
a=rtpmap:105 CN/16000
a=rtpmap:13 CN/8000
a=rtpmap:126 telephone-event/8000
a=ssrc:2411427001 cname:0uphoIzFFc4tBBDf
a=ssrc:2411427001 msid:akLep2Ro5PJZMHZbFqARiPlU485AHVNwnnUK akLep2Ro5PJZMHZbFqARiPlU485AHVNwnnUKa0
a=ssrc:2411427001 mslabel:akLep2Ro5PJZMHZbFqARiPlU485AHVNwnnUK
a=ssrc:2411427001 label:akLep2Ro5PJZMHZbFqARiPlU485AHVNwnnUKa0

What's interesting is that you won't see this on the wire. Because WebSocket serves as a transport for SIP, you only see an initial HTTP Upgrade request.

GET /ws HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Protocol: sip
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: JMjC0QBuN4z9XHAR4JWfHA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.72 Safari/537.36
Cookie: __utma=96741151.1671289293.1374742717.1374742717.1374745812.2; __utmc=96741151; __utmz=96741151.1374742717.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)

Tuesday, July 16, 2013

SSL, Keys, Encryption: know just enough to make it work

I've been using the web for a while, but I've never really understood how encryption works. And after all these years, I don't really care. I don't want to know how it works - I just want to know little enough to get things to work. This is really just a note to myself, that I can read whenever I need to deal with encryption. Because every few years I have to figure it out all over again just to get something to work.
NOTE: this is not meant to be 100% technically correct. I'm going to confuse some terms, and mix some stuff up. But it's OK, because I don't care. I just want it to be sufficiently correct so that I can get the thing to work next time.

I have used (or only realised I was using) encryption in these three main cases:
  1. HTTPS sites
  2. SSH  keys
  3. SIP over TLS
The most important part of how SSL keeps traffic safe is authentication. By authenticating that the server is who it claims to be, the two sides can then encrypt the traffic between them.

A small excerpt:
Authentication and encryption are two intertwined technologies that help to insure that your data remains secure. Authentication is the process of insuring that both ends of the connection are in fact who they say they are. This applies not only to the entity trying to access a service (such as an end user) but to the entity providing the service, as well (such as a file server or Web site). Encryption helps to insure that the information within a session is not compromised. This includes not only reading the information within a data stream, but altering it, as well.
While authentication and encryption each has its own responsibilities in securing a communication session, maximum protection can only be achieved when the two are combined. For this reason, many security protocols contain both authentication and encryption specifications.

These are a few terms that we need to introduce first (again, not 100% correct, it's just my version):
  • SSL/TLS: I will be using these interchangeably. SSL is the predecessor of TLS. Both are used in browsers today. SSL and TLS run on top of TCP, and ensure the safety of the traffic.
  • Key:  this is the signature of something. Used to verify its identity. For this discussion, we are talking about asymmetric keys, which consists of the public and private keys.
  • Public Key: used for verifying signatures and for encryption. The public key of a site is distributed publicly. Clients use this to verify who the server is.
  • Private Key: used for signing and decrypting. Private keys are never distributed, and are guarded very seriously.
  • PKI: this involves CAs, and how clients can verify certificates.
  • Public Key Cryptography: messages encrypted with the public key can only be decrypted with the associated private key. Conversely, messages signed with the private key can be verified with the public key.
  • RSA, DSA: a Key Exchange Algorithm
  • X.509: the standard for certificates.
  • Certificates contain public keys, and are signed by a CA.
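You can see the public/private split in action with openssl. The file names here are arbitrary examples:

```shell
# Generate a key pair: the private key, then the public half extracted from it
openssl genrsa -out priv.pem 2048
openssl rsa -in priv.pem -pubout -out pub.pem

# Encrypt a message with the PUBLIC key...
printf 'hello' > msg.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in msg.txt -out msg.enc

# ...and only the PRIVATE key can decrypt it
openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc -out msg.dec
cat msg.dec
```

Try decrypting with the public key - it won't work. That asymmetry is the whole trick.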

SSL runs on top of TCP. This is what the protocol messages look like:

Ok, now lets see how these things come together in the different use cases:

HTTPS - Browser

Basically, the browser (client) sends a request to a website (server). The website responds with its certificate, which contains its public key. The browser can now verify that it is really talking to that website, and not someone else: the client is certain that the identity of the server is authentic. The two then agree on a key to encrypt the data, which is itself exchanged encrypted with the server's public key.
There is an element of trust. The browser trusts the CAs. So if a site says its certificate was signed by a particular CA, then once the browser verifies that fact (by checking that the site's certificate was signed by the CA), it trusts that the website is who it says it is. And that really is how SSL ensures safety: that the bank website you are visiting is in fact the bank, and not an imposter trying to steal your credentials.

To get a publicly/CA-signed certificate for a new website you want to host, you can go to Thawte or Verisign and paste your CSR into the order form. The CSR is generated on the server you want the certificate for, along with an associated private key.
Once the CA verifies that you own the domain, they will sign the CSR with their private key and issue you the signed certificate. You can now import this into your web server.
You can also use self-signed certificates, but browsers won't be able to verify the server's authenticity.
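For example, generating the key, the CSR, and (for testing) a self-signed certificate with openssl looks roughly like this. The CN is a made-up host name:

```shell
# Generate a private key, then a CSR to paste into the CA's order form
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"

# Or skip the CA entirely and self-sign (browsers will warn about this)
openssl req -new -x509 -key server.key -out server.crt -days 365 \
        -subj "/CN=www.example.com"
```

The CA's signed certificate replaces server.crt in the real, public case; the private key never leaves your server either way.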

Mutual authentication can be used as well, where the server asks for the browser's certificate. This can be used, e.g., to verify a person for access to a private site.


I needed access to an SSH server at work. The admins don't allow the use of usernames/passwords - only SSH keys are allowed. I generated a key pair on my local box, and sent them the public key, which was put into the allowed keys on the SSH server. When I SSH into that box, my SSH agent presents the private key. Thus only I can gain access to that box, because only my private key can decrypt messages encrypted with the public key.
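The key pair itself is generated with ssh-keygen, something like this. The file name is arbitrary, and I'm skipping the passphrase purely for the example:

```shell
# Generate an SSH key pair; the .pub half is what you send to the admins
rm -f mykey mykey.pub
ssh-keygen -t rsa -b 2048 -f mykey -N "" -q

# This public key goes into ~/.ssh/authorized_keys on the server
cat mykey.pub
```

In real use you should set a passphrase, and let ssh-agent cache it for the session.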


TLS can be used to encrypt SIP messages. You can encounter SIP over TLS in two ways:

  • Softphone to a SIP server: on your softphone, you can enable TLS, usually to a different port on the SIP server. The server will need a certificate (self-signed will work) and a private key. The softphone and server will then negotiate and encrypt the traffic.
  • SIP trunk: in this case, you are setting up a SIP over TLS trunk between two SIP servers (perhaps MS Lync and an SBC), and mutual authentication is required. For this to work, Lync and the SBC each need a certificate. We did not use a public CA, so we used self-signed certs. Each device generates a CSR, which gets signed by a CA. Then each device loads its cert, plus the root/domain cert. During TLS negotiation, they exchange certificates, and because both are signed by the same root, they are now authenticated. 

Here are some of the funnies I have come across:

  • There are different types of certificates. From Thawte, you can buy SSL123, Web Server, or code signing certs.
  • There are different formats of certificate files: .pem, .csr, .p7b, .pfx. Some are text-based, and some are binary.
  • Certificates have different usages: server, client, signing, etc.
  • You can use openssl to debug SSL issues. openssl can act as a server or a client, and can be used to negotiate and debug SSL.
  • Wireshark can be used to debug SSL issues. Capture some traffic, and use Decode As if Wireshark does not automatically recognise the SSL (e.g. when non-standard ports are used).
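As a sketch of that openssl debugging trick, you can even run both ends locally. The port and file names are arbitrary:

```shell
# Make a throwaway self-signed cert, start a TLS server with it in the
# background, then connect to it with s_client and capture the handshake.
openssl req -new -x509 -newkey rsa:2048 -nodes -keyout test.key \
        -out test.crt -days 1 -subj "/CN=localhost"
openssl s_server -accept 4433 -cert test.crt -key test.key -quiet &
SERVER_PID=$!
sleep 1
echo | openssl s_client -connect localhost:4433 -showcerts > handshake.txt 2>&1
kill $SERVER_PID 2>/dev/null || true
```

handshake.txt now shows the certificate chain, the negotiated cipher, and any verification errors - exactly the things you'd otherwise be digging out of a Wireshark trace.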

One common misunderstanding I had was thinking that to enable authentication between two parties, they both need the same certificate. They don't: one side needs a certificate, and the other needs the associated root certificate to verify it.

I found these links very useful:

Monday, June 10, 2013

Dell XPS 13'' - The unboxing

I've always wanted to do a product unboxing... but I won't. Besides it not being very fun, this is not a new laptop.
I will be using a loan laptop for the next few weeks. Normally, you'd expect a loan laptop to be an old piece of crap that's been used by a few former employees, and is slow and heavy. Not this one. I got a one-month-old Dell XPS 13 inch Ultrabook.

I don't think you can see how thin it is. It's very thin and very light, all in an aluminium shell. It's so thin, it does not have an Ethernet or serial port; those are available via a USB dongle.
With a Core i7 and an SSD, it's very fast as well. This one has Windows 7 (Windows 8 is an option). I installed Ubuntu 13.04 in VirtualBox, with decent performance.

All in all, a very fast, small and light laptop. After a few days, the 13 inch screen does not even feel that small.

I think I'm ready for a Macbook 15" now.

Tuesday, May 14, 2013

Back home again

In a previous post, I complained about the laptop I've been using for the last three years: a Lenovo R400, Core 2 Duo, 3GB RAM, that ran extremely slowly on Windows 7. Excel and Visio 2010 were enough to make this thing hang.

I am finally due for an upgrade, and we were offered the chance to buy our current laptops for R1200. All of a sudden, this slow laptop turned into a good deal. One day later, I am happy to say this machine is flying.

I downloaded Ubuntu 13.04, but did not have a USB stick or CD to burn the ISO onto. Using DriveDroid on my Galaxy Tab 10.1, I could boot into the ISO and install Ubuntu.

I have not been on Ubuntu for three years... I don't recognise this window manager!

Monday, May 13, 2013

People Engagement - smoke and mirrors

We had an interesting People Engagement focus group the other day. HR runs an annual web-based Human Capital Assessment Survey. It's quite boring, and less than 50% of employees take the time to fill it out. This time, they did something different. About 10 of us were invited to a meeting with a Senior Executive (now called Managing Directors) and HR. They wanted to find out if we had any burning issues that needed addressing.

It was a 3 step process:

  1. We were given four sticky notes. On each, we had to write an issue that we wanted to bring to HR's attention - one issue per sticky note, all anonymous. These were then stuck onto the glass wall. Issues that commonly came up were: no work/life balance, lack of training, etc. HR then went through all of them and arranged them in lines, grouping related issues. E.g. all of the training-related issues were in one line, and all the work/life balance ones in another.
  2. We were then given four coloured dots, to be used as voting coins. Each person went up to the glass wall and put a dot on any issue they wanted to vote up. You could use all four on one issue, or distribute them any way you wanted. HR then counted the votes (dots) per group. E.g. training issues might have had 2 dots, while work/life balance had 20. These were tallied up in Excel and sorted by descending number of votes, so the top-voted issues were listed at the top.
  3. Now we were given one more sticky note, and this time we had to write down a solution to a single issue. To quote: "if you want me to fix one thing by the end of the financial year, tell me what I need to do to fix it".
The Exec then replied to each of the 10 sticky notes, and we spoke about it in the group session.

As I drove home, I appreciated the technique that was used. They could have asked us to write down everything that is wrong with the company, and then tried to address each item in the group session - but obviously you can't reach any substance this way; it would just turn into a shouting match. Rather, their technique focussed on narrowing the issues down to just four per person. Then, by voting on each, we highlighted the most important ones. By doing this in a group session, the group was able to speak as one, as it were.
Then, by sorting by votes, we had just a few issues listed on the screen, with the most important ones at the top. This focusses and funnels things even more narrowly, ensuring that at the end there are only a handful of unique issues. This then becomes easier to action - HR go home feeling like champions, and we go home thinking they really care.

But I went home feeling... cheated. Like a maths trick game, the outcome felt clearly pre-determined. It's like they knew what the issues were, and by letting us say them, we felt like they understood us. It felt like it was all cleverly scripted, and we were all just acting it out.

I'm gonna use this on my kids!

Hiring 2.1 - The Agile Way

I've been through my fair share of job hunting. Perhaps too often. But I know enough of the job finding and hiring process to know that it is broken, and desperately needs fixing. This post is a pet peeve that's been brewing for a number of years. My biggest gripe is that it takes far too long, for both sides:
For the potential employee, it takes a minimum of two months.
For the employer, it's at least double that.

The basics of the hiring process are as follows:

  1. A department in Company A realises it is taking strain - the workload is too much for the current staff. Most of the time this is purely reactive, like when there has been some turnover of current staff; sometimes there is demand planning done at the start of the financial year, or when a new strategy or operating model is released.
  2. The department looks at its budget, and realises it can hire x number of people, at y different levels (junior, middle management, senior). They agree this with HR.
  3. The hiring manager writes up the specs of the positions. Most of the time he copies the spec that currently exists (maybe the one that was published when they hired him 10 years ago), and modifies it a bit. It contains heavy company lingo, with some industry nomenclature. If he has the time, he might Google a bit to see what everybody else is using to hire.
  4. The job spec gets sent to HR. Some review takes place, with approvals and sign-offs.
  5. HR release the spec on their internal job board, maybe concurrently on their external job portal, and to any recruitment agencies that they normally use (about five). A closing date is set about three weeks in the future.
  6. After the closing date, there are about two hundred applications. HR easily dispose of most of them, by guessing, keyword scanning, or just plain luck.
  7. About ten CVs/applications are sent to the hiring manager. He selects about five to interview. HR contacts them to set up interviews, or asks the recruitment agencies to contact the applicants. Since a recruitment agency is just a delayed man-in-the-middle, this takes at least a week.
  8. Depending on Company A's recruitment policies, there will be about two to four rounds of interviews, with these stakeholders: HR, the hiring manager, co-workers, a senior manager, and finally an executive/partner. They will consist of standard HR questions (tell me about your greatest weakness), technical questions (tell me what happens when you click search on Google), psychometric tests, brain teasers (why is a manhole cover round?) or just plain ludicrous ones (hands the candidate a Rubik's cube, holds a timer, and says "GO!")
  • Each round of interview needs to be scheduled (1 week)
  • Conduct all interviews with the five candidates (2 weeks)
  • Review each candidate with go/no-go (2 weeks)
  • Rinse and repeat for each round

  9. After the final round, you have one or two candidates left. You make an offer to the best one.
  10. Only now does the candidate get to see the financials. If he accepts, he starts to serve his notice period. If he rejects, you hopefully have the other candidate waiting.
  11. The notice/resignation period can be anything from 30 days to a calendar month (which could last almost two months).

In South Africa, a large consulting firm (which globally hires 60K people annually) realised that it needs to double its workforce by 2015. That's even taking into account doing work offshore/near-shore and bringing in cheaper labour. An expert recruiter told them that the SA workforce simply cannot deliver that number of candidates. Considering that other companies are also increasing their workforces (except for Cell C, still handing out letters), the demand is not sustainable. Add BEE, ridiculous recruitment times and un-managed resignations, and you can see that the situation is even worse. 

Personally, I've found jobs via both recruitment agencies and direct applications. In 2005 I responded to a job post in The Star newspaper via fax (old school), twice via recruitment agencies, and recently via LinkedIn. My most recent experience lasted seven months: I went for my first interview in November, and I start the new job in June. Most of the time I received multiple offers, so I had the luxury of choosing the best. The longer the recruiting process, the bigger the potential to lose good talent to the competition. That's why it's even more important for companies to implement a more agile way of recruiting, to ensure that they can effectively compete in the job market. In some cases, when my timing was lucky, I had two or more offers to choose from at the end of the process. In other cases, I had to accept an offer from one company while another company's offer came a week later, and I had to politely let them down.

We need more than just LinkedIn Recruiter or Stack Overflow Careers. We need more than just better tools - what we really need is a culture and mindset change. Companies cannot simply rely on their popularity and size to attract talent - they need to actively seek it out. They need to understand that talent is a scarce resource, and that they are operating in a competitive environment. For every person invited to an interview at Company A, he is probably also going through the recruitment process at two or three other companies. Considering that SA is an emerging market, the demand in IT (specifically in the communications and technology industries) is more than the supply pool can handle. Companies need to source talent in a fast and efficient way. But perhaps the biggest thing that holds them back is risk: the risk of making the wrong decision and hiring the wrong guy. Yet even now, with the three to seven rounds of interviews that candidates go through, the wrong guys still get hired. And it's not about hiring a guy with the wrong skill set - it's easy enough to weed out the posers. What is harder is to find people with the right cultural fit, to make sure you don't hire a manage-by-Excel guy. And with South African labour law, it's almost impossible to fire someone. So I get it: it's difficult to find the right people. But surely the risk of being understaffed is greater, and every extra day that you spend looking for good talent, you are losing them to other companies.

The long recruitment timeline needs to be drastically cut. I am suggesting a faster way of running the interviews. At the moment, candidates go back and forth for each round of interviews: they attend during their lunch breaks, or might even take a half-day's leave. Assuming that each company has three rounds of interviews, and that the candidate is being interviewed by at least three companies, that's at least nine rounds of interviews he will attend. From each company's perspective: if they start off with five candidates for round one, three for round two, and one for the last round, that's nine interviews that HR needs to schedule. Considering that you need open slots in each stakeholder's diary, and in each candidate's diary, you can easily see why this takes at least a few months.

What I am suggesting is a bit radical: we schedule ALL rounds with ALL candidates in a single day - or maybe over a few days, but definitely within a week. At the end, you have a final list that you can make an offer to. Let's go through it:

Consider Department B in Company A hiring for a single position. After HR sifts through the initial applications, the hiring manager invites five people for interviews. (Important point: at this stage, the people invited to interviews are already potential candidates. That means you've already weeded out the obvious no-gos, and that the selected ones possess the required skills, experience, etc. on their CVs. The interviews now are really to confirm what is on their CVs, and to ascertain the culture fit.) The candidates are advised to take a whole day's leave. These should be individual one-on-one interviews with each candidate, though candidates could also be grouped together to make things go faster. HR then books the following on the scheduled day:

  • Half-hour interviews with each of the five candidates - this goes from 8:00AM to 11:00AM. HR and the hiring manager sit in each interview, with 5 minutes in between to review. There are only 3 choices, to limit decision fatigue: Go, No-Go, Discuss.
  • 11:00AM to 11:30AM - go through the list to see who needs to be invited to the next round.
  • 12:00PM to 3:00PM - the shortlisted candidates get called back for the 2nd round individually, this time with the hiring manager and possibly team members and/or a senior manager. This can be an hour long if required. Same 3 choices: Go, No-Go, Discuss.
  • 3:00PM to 3:30PM - go through the list to whittle it down to the last two.
  • 3:30PM to 4:30PM - the Hiring Manager, HR and an Executive/partner interview the last two, for half an hour each.
  • 4:30PM to 5:00PM - decide to make an offer to one candidate. The other is kept as backup, in case the first one declines.
The above could perhaps be done over two days if the list of initial candidates is large, perhaps with different groups on different days. The benefits of this technique are:
  • It's easier to book diaries. Only the hiring manager and HR are dedicated for the day; the rest come in when required.
  • The candidates don't need to drive back and forth for the different rounds. Even if they attended two other companies' interviews, they would only need three days of leave.
  • Most importantly, it cuts down on interview stress for the candidate. They can leave at the end of the day with certainty, knowing whether they made it through or not. Believe me, the biggest stress comes from playing the waiting game. A few times a company has only got back to invite me for the next round after a month (including Google). The waiting severely limits your choices - you might accept an offer, not knowing whether another company might still get back to you.
  • This way, the hiring manager can go home knowing he has his man, in just ONE DAY. Confirming the financials and the acceptance will still take some time, but at least the final candidate knows what his options are.
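As a rough sanity check, the single-day funnel can be modelled in a few lines of Python. The round sizes, durations and the 30-minute review breaks are illustrative assumptions lifted from the sample timetable above (the 5-minute gaps between first-round interviews are omitted for brevity); this is a sketch, not a real scheduling tool.

```python
from datetime import datetime, timedelta

# Each round: (description, number of candidates, minutes per interview).
# Counts and durations follow the sample timetable in this post.
rounds = [
    ("Round 1: HR + hiring manager",      5, 30),
    ("Round 2: manager + team",           3, 60),
    ("Round 3: manager + HR + executive", 2, 30),
]
review_minutes = 30  # shortlist review (Go / No-Go / Discuss) after each round

slot = datetime(2013, 5, 9, 8, 0)  # the day starts at 8:00AM
for name, candidates, minutes in rounds:
    for i in range(1, candidates + 1):
        print(f"{slot:%H:%M}  {name}, candidate {i}")
        slot += timedelta(minutes=minutes)
    print(f"{slot:%H:%M}  Review: Go / No-Go / Discuss")
    slot += timedelta(minutes=review_minutes)

print(f"{slot:%H:%M}  Decide on the offer; runner-up kept as backup")
```

Running it shows the whole funnel - ten interviews plus three review sessions - wrapping up by 4:00PM, comfortably inside a single working day.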

Now this potentially poses a challenge to current recruitment agencies. But they can use it to their advantage: they can handle all of the scheduling of interviews with the candidates. They might even use their own premises, which could be ideally set up with enough seating and activities to keep candidates busy for the day. They could even turn it into a recruitment fair by inviting a few companies to interview at the same time. The HR and hiring managers of each company could attend, choose which candidates they want to interview, and then make their decisions. It's fair game!

I agree that this alone is not enough to change current conditions. We need better tools as well. Most companies use Taleo as their internal job board, which is ugly and difficult to use. Candidates have to register their profiles on each company's site, and set up automatic notices or remember to check back often to see if there is an opening. LinkedIn has only partially solved this. Recruitment agencies seem to rely purely on online tools such as Careerweb and CareerJunction, which leave much to be desired.

We need Agile for Hiring!

Thursday, May 09, 2013

I'm thinking of going Mac

I'm about to change laptops. Of the ones I have seen around, nothing grabs or excites me. Most of the laptops I have seen lately in meeting rooms are the usual suspects: Dell, Acer, Lenovo. They all look boring - the same thing I've been using for the last few years. There's nothing that inspires, nothing akin to getting a new cell phone.

I've briefly considered getting a laptop/tablet combo, something like the Samsung Ativ. But I am not sure whether I would quickly get over the novelty of the touch screen. Then the only thing I would be stuck with is Windows 8.

So I am really thinking about getting either a MacBook Pro or a MacBook Air. The new MacBook Pros come with a Retina display, but the Airs have the SSDs. The other possible factors are the lack of a DVD drive on the Air (I have not used a CD/DVD this year), and a few other things. What has really driven me towards the Mac is that I will have a shell easily accessible. Parallels or some other virtualisation software will allow me to run Windows apps (Visio and Project) and Ubuntu in a VM, without having to dual boot.

I have never used an Apple product before, so that's what makes me uncertain. For the last 3 years, I have been using a Lenovo R400 - a 14.1 inch dual-core sorry excuse for a laptop. Performance is dreadful - having Visio and Excel open together is enough to make the machine hang.

Prior to that, I used an HP 17 inch laptop for about 3 years. I was generally impressed with it. But that was 2009 - I doubt a heavy laptop with lots of USB ports and a DVD writer is what I am looking for now.

I *think* what I want is something light, with a decent screen (15"), lots of power, and a shell close by. Ubuntu is not ready for full-time enterprise use, so that's perhaps why a Mac sounds like a good choice.

Whaddya think?