CTOIsrael
Experiences and lessons learned from my position as CTO of a small tech company in Israel. Follow me on Twitter @ctoisrael. Comment if you want help with anything I have written about.
Sunday, February 5, 2012
Readers of this blog will know I have a huge axe to grind about bad HR, particularly here in Israel (here, here, here and here): bad interviews, bad interview questions, not bothering to filter candidates properly. As my company grows and I start to look for new people, I strive to find better techniques for interviewing and hiring, some of which I have mentioned as potential solutions in my earlier posts, to get that perfect match. I have just found Skills, a company that may go a long way towards ensuring that the right CVs land on my desk. I look forward to using them the next time I am hiring.
I got to them by browsing the jobs on another site I am excited about, Gigantt. Not that I am looking for a job; when I see new companies I always like to look at what kind of people they are hiring. It gives me a wider view of what they do and how they go about doing it. Gigantt looks like an interesting and very user-friendly project planning tool, and maybe just what I need right now. I am waiting to hear when I can start beta testing.
Monday, January 16, 2012
My new super duper server
For the hardware geeks among you, let's start with the specs.
Dell R810 platform
4x Intel Xeon E7-8837 2.67GHz 8-core processors
128GB RAM
6x 300GB 10k RPM SAS drives (RAID 5, using one as a hot spare)
This server is for running simulations. It is not serving up web pages, or in fact running any sort of internet application. I needed plenty of parallel power, and this is what fit the budget and requirements best.
So what did I do with it?
Before I start: one of the reasons I am writing this blog is to get input. If, while reading this, you see something that looks wrong, a mistake, or a better way to do it, feel free to drop me a line. Let me know where I have gone wrong, and feel free to suggest improvements.
This beast chews through a large amount of electricity and makes a lot of noise. It also pumps out a huge amount of heat. My office is small, so we elected to host it at a nearby server farm and let them worry about all that. When it arrived I went down there and installed it. Spending a day in a server room is not fun if you don't have warm enough clothes; I can also recommend some fingerless gloves.
Annoyingly, when I started to install the OS I noticed something wrong with the hard drive space. I stopped the install and rebooted into the RAID management system. Someone hadn't seen my note about keeping one drive as a hot spare. I had to rebuild the RAID, which took a while because the initialization is slow. Having sorted that little problem out, I proceeded to install Ubuntu Server 11.10.
The simulation program that I have created is written in Java and has a GUI front end for configuring the specifics of each individual simulation. That meant I needed some sort of graphical window manager. I installed unity-2d (I am pretty sure this was a mistake). Once that was installed I added GNU Screen, the Oracle version of Java, Eclipse, Firefox, Webmin and FreeNX.
Reasons for my decisions:
Ubuntu
I am most familiar with this distribution; I use it at home and for other systems at work. I feel comfortable with it, and under the time constraints I felt I needed to choose something that required no time to learn. Debian could have worked too, as the two are pretty similar and both use apt, but given that similarity I reasoned there would be no difference either way, so I stuck with Ubuntu. My other reasonable option was CentOS. Not being familiar with it or with yum, I decided I did not want to spend the whole day looking up how to install an operating system I had never used before. What I should have done was run some test installations on VMware instances on my old server.
Java
The simulations are for optimization and testing of a live/real-time system that runs elsewhere, i.e. not on this particular server. The simulations are far more compute-intensive than the live system. The original system that I created depended on a Java API; there were no other options at the time. As a result, the software I have written has remained in Java. As time has gone on we have moved away from the original API and now primarily use QuickFIX/J, another Java API. QuickFIX has implementations in other languages, but considering how far I had come with the software already written in Java, it seemed like too much effort on my part to rework everything into another language like C++. I picked the Oracle JRE as I have had some trouble in the past with the IcedTea version.
Unity-2d
Unity itself is a controversial window manager; Ubuntu created it as an alternative to GNOME 3. As a change from GNOME 2, the previous default, it is pretty drastic, and many people are left feeling that the choices they used to have with Linux are disappearing into a world of constrained window managers. To be honest, I do not agree. Ubuntu is aimed at the entire desktop market; their goal is to create an easy-to-use Linux desktop (or laptop) that you don't have to be a Linux guru to understand, and there are options if you want to choose something else. I digress. I chose unity-2d as there is only a standard graphics card in the server and I really don't need the full version. I believe this was a mistake: I should have opted for something even more lightweight and simple. I am not sure whether it will cause me any problems, but I could certainly have used something else. I did, however, have to choose something. As I said, the application I will be using is controlled by a GUI written in Java, so I must have X and a window manager.
My major annoyance with Unity is that when I have many windows open I just want a regular taskbar. The Unity panel is annoying and can really slow me down. The simple solution was gnome-panel. I now have a taskbar at the bottom, which makes switching between running applications much easier. There was another option, tint2, but I preferred the simpler UI of gnome-panel.
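For anyone wanting to copy this, it only took a couple of commands. A sketch, assuming the package name in the 11.10 repositories; I haven't checked every way it interacts with unity-2d:
sudo apt-get install gnome-panel
gnome-panel &   # starts the classic panels (menus at the top, window list at the bottom) in the current session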
For those of you wanting to know more about configuring Unity in general, have a look here. And there is more on using Unity's features here.
Everything is working fine now as far as the window manager is concerned. I still have my doubts, though; it is possible that Unity is slowing things down a little. Possibly there is no escape: while I need a GUI I must use something, and this might be just as good or bad as any.
Eclipse
I use Eclipse for my Java coding, so it is useful to have it available should I need to do something directly on the server. I don't envisage needing it very often at all, but it is better to have it available just in case.
Webmin
I love Webmin; it makes many sysadmin tasks very easy. Additionally, I have made it available over the internet so that I can use it from my office. I especially like it for configuring the firewall: iptables can be a bit fiddly, and Webmin makes life much easier.
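To give a flavour of what Webmin saves me from writing by hand, the rules for "allow SSH and Webmin in, drop the rest" look something like this. A minimal sketch, assuming Webmin is still on its default port of 10000; make sure the SSH rule goes in before you set the DROP policy, or you will lock yourself out of a remote machine:
sudo iptables -A INPUT -i lo -j ACCEPT                                  # local traffic
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to our own connections
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # SSH
sudo iptables -A INPUT -p tcp --dport 10000 -j ACCEPT                   # Webmin's default port
sudo iptables -P INPUT DROP                                             # drop everything else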
FreeNX
I am using FreeNX, a free port of the NoMachine NX server, for my remote desktop sessions. The advantage of this compared to exporting the display is that it is easy to keep the current session alive, like most remote desktop software. I have had trouble using VNC on Linux in the past, and I had heard about FreeNX before; having searched around the internet, I found that it was recommended. An additional bonus is that it opens a session separate from the one viewable directly on the server. Instructions for Ubuntu installation can be found here. I have not been able to make it available over the web in a browser, but there are instructions here. So far so good: configuration was easy, and the NoMachine NX client for Windows connects, disconnects and reconnects to existing sessions with no problems. The interface is nice and fast over my internet connection, and so far I am happy with the results. I can use my software in a nice full-screen GUI over the internet as if I were sat in front of the server myself.
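For the record, the install itself boiled down to something like the following. This is a sketch from memory, assuming the FreeNX team's PPA still carries packages for your release; the linked instructions cover the post-install setup and key handling:
sudo add-apt-repository ppa:freenx-team
sudo apt-get update
sudo apt-get install freenx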
That's my setup. I will follow this post up with any gripes and problems I have with the system as I begin to put it through its paces. Let me know if you have any questions about what I did or what I installed, and if you have problems with any of these bits of software, drop me a line; maybe I can help.
Wednesday, January 4, 2012
Interview questions - some ideas and why they are helpful
I have been away on vacation and got some time to read, so I dug into The Black Swan by Nassim Nicholas Taleb. He is an excellent and thought-provoking writer. I am reading this book extra slowly, as I like to stop after every page or so and think about what he has written.
In the book he poses some interesting questions, and I think some of them could work as interview questions. So that I don't leave you in suspense, I will put the questions here and let you think about them while I give a further introduction to what I want to achieve, which is not necessarily the point Taleb was making.
A) If you had no restrictions on folding a piece of paper and could physically fold it 50 times, approximately how tall would the folded paper be?
B) Given the number sequence 2, 4, 6, find the rule that determines the sequence. You give me three three-number sequences, and I will answer yes or no according to whether each conforms to the rule.
C) If I flipped a fair coin 100 times and it came up heads 99 times, what is the probability that the next flip will be tails?
I have talked before about stupid interviews that waste my time as well as the time of the employees of the company interviewing me. I should note that I have been asked interesting questions and puzzles in some interviews, and one or two silly ones too. The one that springs to mind is being asked: if you could have any superpower, what would it be? If you are an interviewer asking that question, be damn sure you have a good interpretation of the responses to it. Some people, even geeks, are just not into comics and superheroes. My answer was that it is very hard to give an answer that couldn't be sinister. Flying is probably pretty harmless, but mind reading/control, invisibility and super strength are all things that could be used for bad just as easily as for good. I didn't think it was a good question, but I tried to answer it the way I answer good questions: by showing that it was flawed. I then asked the interviewer the same question, and he gave me an unsatisfactory answer. I should have asked him how he interpreted my answer; I don't think it made any difference to the result of the interview. I will concede that if the future employer is a huge comic book fan, as is the team the position is for, then it may be a useful tool for assessing how good a fit the future employee will be.
Additionally, mathematical and logic puzzles may not give the best metrics for good employees when used incorrectly. Know what you want to get out of the answers; it is not good enough to find a question on the internet and ask it, hoping that the answer will indicate whether the potential employee can think outside the box or has above-average logical reasoning skills.
Looking at the questions in a little more detail:
A) Firstly, the answer I don't want to hear is that a piece of paper, no matter what the size, cannot be folded more than 7 times (this is the currently accepted number for a regular piece of paper, though apparently a girl managed 12 folds and came up with an equation to prove it). I am not testing knowledge, and no one likes a smart ass. Besides, I asked the question stating there were no physical restrictions.
I want to see the reasoning and thought process. First, state your assumptions. A piece of regular printer paper is clearly not 1cm thick. I would accept the candidate assuming 1mm, though that is a factor of 10 out; it is much closer to 0.1mm. But I am not testing knowledge here, just reasoning. Let's assume it is 1mm; we can always divide by 10 to get the real answer later. It should be fairly clear that the answer is 2^50 * thickness. I do not expect the exact answer here, but I would like the order of magnitude. I have a shortcut: 2^10 is 1024, so for every ten folds the height of the paper goes up by a factor of roughly 1000, which means the answer is about 10^15 times the thickness. If the thickness is 1mm then the height is approximately 1,000,000,000 km. The real answer, given the actual thickness, is about 100,000,000 km: roughly two thirds of the way from here to the sun.
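If you would rather check the arithmetic in code than in your head, here is a minimal sketch of the estimate (the 0.1mm thickness is the assumption from above):
public class PaperFolding {
    public static void main(String[] args) {
        double thicknessMm = 0.1;                         // roughly real printer paper
        double heightMm = thicknessMm * Math.pow(2, 50);  // the thickness doubles with every fold
        double heightKm = heightMm / 1000000.0;           // mm -> km
        System.out.printf("Height after 50 folds: %.0f km%n", heightKm);
        // Prints about 112,589,991 km -- order 10^8 km, two thirds of the way to the sun.
    }
}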
I have a confession: I am horrible at mental arithmetic. I could train myself to be better at it, but I guess I just got lazy. If I need to add something up accurately, I usually have a computer, a calculator or even pen and paper handy. What I am good at is estimating, and that is phenomenally powerful. The ability to do this well, and accurately enough, can be more useful than just plugging the numbers into a calculator. If I look at some calculation and estimate the answer first, it is easy to spot errors if I make a mistake when plugging the actual numbers into a computation device. I attribute this lesson to Mr Roger Hand, my A-level physics teacher; thank you so much for this, it has been an invaluable tool throughout my life.
I like to know the scale of big numbers; it puts things in perspective. It is not the most important thing here, but you can expand on the estimation aspect by equating a large number with something it represents. 100 million km really doesn't mean that much to me. The distance to the moon is 400 thousand km, so it is considerably more than that and, as I said, two thirds the distance to the sun. If someone said it is about the distance to the sun, that would be satisfactory: it is the right order of magnitude.
Enough about estimation. The other skill this highlights is knowledge and understanding of powers of two. I honestly wouldn't care if the candidate had to write down the powers of two, as long as they stopped at 2^10. It should be obvious from there how to shortcut to the answer.
I will highlight why these two skills are necessary in a candidate I would employ. We deal with large numbers, millions and tens of millions of dollars, and we multiply them by fractions of pennies. If we produce results that are factors of 10 out, it should be obvious from an estimate done before the calculation. This is pretty critical, as we are talking about real money transactions and position management. The powers of two should be second nature to any computer science graduate. We recently had to send messages to a system in byte codes, and I spent far too much of my time explaining the binary, decimal and hex representations of these strings to my employee. It was immensely frustrating for both of us, but we got through it. In a world of high-level languages, it is too easy to forget that the roots of computing lie in bits and bytes.
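For what it is worth, the JDK makes these representations easy to play with; a trivial sketch:
public class Representations {
    public static void main(String[] args) {
        int value = 42;
        System.out.println(Integer.toBinaryString(value)); // 101010
        System.out.println(Integer.toHexString(value));    // 2a
        System.out.println(value);                         // 42
    }
}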
B) What three sequences did you think of? 1,3,5; 8,10,12; 20,22,24? The answers would be yes, yes and yes. Your conclusion would be increments of 2. You would be wrong. That answer is clearly a possibility, but not what I was looking for. You may argue that it fits your sequences; of course, so does the correct answer. You may get defensive and ask how you were supposed to determine that, to which I would answer: you must try to find a sequence that doesn't fit. Using the following three sequences: 1,3,5; 5,10,15; 6,5,3 would result in yes, yes and no. 1,3,5 confirms your initial assumption that the numbers increment by 2. 5,10,15 disproves that the increments must be 2, or even regular. Finally, 6,5,3 shows that the sequence cannot be descending. The answer is that the sequence must be ascending, and that is all.
It can only be shown by finding counterexamples as well as examples. If you just go about proving yourself right with each suggestion, you will get locked in a corner. A candidate able to solve this simply should be good at bug testing: a successful bug test is one that fails; any other bug test does not show evidence of a bug. This must not be confused with showing evidence of no bugs. A semantic difference for sure, but a very important one.
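You could even hand a candidate the interviewer's side of question B as code. A minimal sketch, with the hidden rule and the test sequences from above:
public class SequenceRule {

    // The hidden rule: the numbers need only be strictly ascending.
    static boolean conforms(int a, int b, int c) {
        return a < b && b < c;
    }

    public static void main(String[] args) {
        int[][] guesses = { {1, 3, 5}, {5, 10, 15}, {6, 5, 3} };
        for (int[] g : guesses) {
            // Prints yes, yes, no -- the third, failing guess is the one that earns the answer.
            System.out.println(g[0] + "," + g[1] + "," + g[2] + " -> "
                    + (conforms(g[0], g[1], g[2]) ? "yes" : "no"));
        }
    }
}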
C) This takes me back to school and the concept of a fair coin or a fair die: it is not unfairly weighted, there is no bias, and it should conform to probability. Of course, if the question were that I flipped a fair coin 5 times and it was heads every time, the probability of a tail would reasonably still be 50%; I would not expect anything else. However, as the numbers grow, the statistics should tend towards the true probabilities. I would reasonably expect around 60/40 one way or the other for 100 tosses, and maybe 55/45 for 1000 flips. If 99 flips out of 99 come up the same, you would be a little foolish to assume I was telling the truth about it being a fair coin. So OK, it is a trick question of sorts, but it is more of a real-world problem: I have been told something, but I have evidence suggesting otherwise, and now I must draw a reasonable conclusion. I am not trying to trick you; I am trying to get you to resolve two conflicting pieces of information. I want to hear that it should be 50/50 if it is a fair coin, but that it doesn't seem to be a fair coin. Equally, I do not want to hear that it is clearly going to be a 100th head: you would have ignored my statement that it is a fair coin and assumed that just because something has happened one way 99 times in a row, it will be the same the 100th time.
Here we see that real-world problems do not always fit into classroom-style examples; the world fits the normal distribution less than we think it does. It also gives an example of how probability and statistics fit together. A reasonable candidate should understand this and show it in their discussion of the question.
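To put a number on "a little foolish", here is a quick sketch of just how unlikely 99 heads from a genuinely fair coin is:
public class FairCoin {
    public static void main(String[] args) {
        // P(exactly 99 heads in 100 fair flips) = C(100,99) * 0.5^100 = 100 * 0.5^100
        double p = 100 * Math.pow(0.5, 100);
        System.out.println("P(99 heads from a fair coin) = " + p); // ~7.9e-29
        // The reasonable conclusion: "it's a fair coin" was the false premise.
    }
}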
I hope this has given any potential interviewers some ideas on what to ask and how to understand the answers they get. Please comment and let me know what you think, and whether you have any other suggestions.
Labels: black swan, bug testing, estimation, hiring, HR, interview questions, interviews, probability, Taleb
Wednesday, August 10, 2011
More on Passwords
Following my last rant about passwords, today's XKCD is right on the money. However, it requires sysadmins to change their silly requirements of between 6 and 8 letters with one capital and one numeral. It is clear that the longer a password is, the harder it is for a computer to guess. However, if you pick words from around your office (like they do in the movies), you could be susceptible to a really good human guess.
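The comic's arithmetic is easy to reproduce. A sketch: the 2048-word list is the scenario in the strip, while the eight random lowercase letters are my own stand-in for the "silly requirements" style of password:
public class PassphraseEntropy {
    public static void main(String[] args) {
        double log2 = Math.log(2);
        // Four words picked at random from a 2048-word list: log2(2048^4) bits.
        double words = 4 * Math.log(2048) / log2;
        // Eight random lowercase letters: log2(26^8) bits.
        double letters = 8 * Math.log(26) / log2;
        System.out.printf("four random words:    %.1f bits%n", words);   // 44.0
        System.out.printf("eight random letters: %.1f bits%n", letters); // 37.6
    }
}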
Labels: passwords, security, sysadmin, system administration
Tuesday, July 19, 2011
Advantages of pair programming and/or good code review
I had a little problem with some code that went live and threw a NullPointerException, causing various problems in the production system. It was far from a disaster, but not great that such a thing should happen, and the error was hard to track down to its source. When I did track it down, I found the problem described in this post. As soon as I saw it, I knew it was contributing not only to the issue but causing additional problems.
The key here was that in the testing, which I reviewed with the author of the code, everything was OK. I would like to have unit tests for everything, testing every possible issue. But we don't. It is not always practical, and it is not always easy to know the issues ahead of time. We should, though; I should insist on it. Would projects that are running late run later? Most definitely. Would we have fewer problems once they went live? Almost certainly. I just don't have the manpower right now to get everything done, and remember, I am not the boss. My superiors are all non-technical: they want things done yesterday, and they don't understand why something isn't ready, just that it isn't. As long as I can rule out critical bugs, it works in my favour to get things rolled out and running and to solve the minor issues as they come up. It is inefficient and it is bad practice. The solution, if you are in the same position as me: get a bigger budget and hire an extra programmer or two.
With or without the extra manpower, there are two methods that can be used to avoid at least some of the issues that come up when going live with new code: pair programming and code review. At my previous place of employment we used both to ensure as few mistakes as possible. Pair programming is an interesting method because it requires one computer and two people: one is in the driving seat, and the other sits next to them and contributes. Anyone familiar with Agile or Extreme Programming will know pair programming. When it works, it is great. The time lost by having two programmers work on one piece of code is gained back in higher-quality code that is easier to maintain and has fewer bugs. Even the write/compile/test cycle is improved, with fewer syntax errors (they get spotted by the one not driving). Ideas are generated and turned over faster. When it comes to maintenance, or fixing the bugs that do get through, having two people who worked on the code originally can be very helpful: if one is on vacation or off sick, or worse, has left the company, the other is likely to be around. This shared ownership of the code is so helpful in those situations.
An old colleague of mine envisioned rotating the pairs reasonably frequently: say, one person works in one pair in the morning and another in the afternoon. The more people you have, the more possible pairs, and this could mean that more than two people have some ownership of the code. Switching pairs around gives people the opportunity to take a break from each other, which is often needed, and to find the really constructive and the really destructive pairings, to be encouraged or discouraged in the future. An additional benefit of pair programming is the lack of other distractions: you cannot be sitting in front of some code with someone and suddenly check your Facebook/Gmail/Twitter. You will certainly need more breaks, but you won't spend real coding time wasting it on the internet.
Where it breaks down is when you do not have many people. It can still be done, but without the variety of a large group: the number of possible pairings is n(n-1)/2 (the triangular numbers), so in smaller groups this is limited. Additionally, not all possible pairs work. I was often paired with weaker coders, which meant I was doing most of the thinking; it was also frustrating for me when I was not driving, because the driver worked so slowly. In some projects I was working with one other programmer, and we would spend some time working on the same code and other time working separately. The other coder would frequently get stuck and ask to do some work as a pair. This was code for either me working and explaining what I was doing to his code as I went along, or sitting behind him growing ever more frustrated as I told him what to do. That was a particularly nightmarish case. Other pairs that I worked with were great: we thought alike, solved problems quickly together, and saved time in exactly the ways mentioned above.
Code review is a lot more annoying, but it is a great final barrier before moving to production. I would often work with another coder on some code, then later ask someone more senior than me to be my pair for the installation. This meant I had to explain all the changes to the senior person, who would look over the code for any glaring errors and understand the changes I had made. Installation was made safer by having someone look over my shoulder, making sure I didn't do something stupid in production. The reason it is annoying is, again, that it takes the time of someone senior to go over something you already know well; if you want to get something into production quickly, it can be really annoying having someone ask you to justify every line of code. In the long run, however, this is the safest way to do things.
If you haven't tried pair programming, try it, even for a couple of hours a day.
Happy coding.
Labels: agile, code review, extreme programming, pair programming, programming
Monday, July 18, 2011
Passwords and Security (cont)
And just like that, here we have it:
https://browserid.org/
an attempt by Mozilla at unifying your login. This is exactly what's needed: one login for all sites. Cue Lord of the Rings reference.
There are many sites, like banking sites, that may not work with this straight away, especially as my bank likes me to change my password every 2 minutes (see previous post).
But with initiatives like this I can see things moving in the right direction.
Google goes a long way towards this: once you are signed in to Gmail, you are signed in to all Google products. However, that is limiting, in that you may want to use something other than Google on the web.
Safe surfing...
Labels: passwords, security, sysadmin, system administration
Saturday, July 9, 2011
Passwords and Security
I have too many passwords, way too many. It is dangerous: I am signed up to many websites, often using my email as a username. I am careful, though, not to use the same password for my email as I do for the websites where my username is my email address. http://bit.ly/rqYTu0 XKCD sums this up quite nicely. I am a big fan of using Google account/Facebook/Twitter logins for other sites. This makes perfect sense to me: I only need one strong password for my Gmail account, and Google will authorize my login to other sites. I really hope many sites pick this up; the internet will become a much safer place, though it could force some people to get accounts with services they do not want. The other day I almost signed up to Facebook because it was the only way to log in to a site. In the end I didn't, so I never got to use that site, but I doubt many people will be in this situation.
That was not the main purpose of this rant. There are some sites, mainly banking, and also a previous company of mine, where you have to change your passwords every 3 months. I just think this is totally excessive; no one takes security seriously at that point. Every time I have to change my password, one of two things happens: I forget the password and get locked out, or I have to write it down on paper and leave it next to my computer. Additionally, in order to try to remember it, I have to pick something easy to remember, and I am not the only one who does this. That being the case, the very process used to create more security is actually creating less security. So, sysadmins, I beg you, stop this; there are better ways to increase security. Insist on very secure passwords that never change, or use something like an RSA SecurID key. I admit these are not always practical measures, but they are better than this pseudo-secure practice of changing passwords every 3 months.
/rantover
Labels: passwords, security, sysadmin, system administration
Friday, July 8, 2011
JVisualVM - Java's hidden monitor and profiling tool
There are a few different ways of profiling your Java application. One is using the -prof switch when running a Java app, which outputs a profile file when the app terminates. Another is a third-party app like YourKit. YourKit is great, but you have to add something to the command line, and after the 30-day full free trial it is super expensive.
Finally, in recent editions of the JDK there is jvisualvm.exe. Make sure you have the JDK and not just the JRE; locate the install directory and find the bin directory, and there you should find the executable. Once you open it up, you can go to Tools and add some additional plugins. On the left-hand side you will see the available JVMs that are connected. The good thing about this profiler/monitor is that you don't need to add anything to the command line like you do with YourKit, where you have to add an option to connect to the YourKit agent. Here it just connects, and you can see all the JVMs running. Each instance of Java runs in its own JVM, so you can run and view many different programs at once. It gives you a basic view of memory and processor usage, along with details about calls to GC and the number of threads and classes in use. Additionally, you can run a profiler which samples the application and gives details of either CPU or memory usage. This is extremely useful for finding memory leaks or, in my case, optimizing code. I do a large amount of number crunching, which tends to use a lot of memory and CPU; using jvisualvm, I have reduced the amount of memory used and made the whole process more efficient. There are other features that let you view information about classes.
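If you want something harmless to point it at, a deliberately wasteful toy like this sketch appears in the left-hand pane as soon as it starts and gives the monitor a nice heap saw-tooth to draw:
import java.util.ArrayList;
import java.util.List;

public class BusyWork {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> hoard = new ArrayList<byte[]>();
        while (true) {
            hoard.add(new byte[1024 * 1024]); // grab 1MB per iteration
            if (hoard.size() > 50) {
                hoard.clear();                // drop ~50MB and watch GC reclaim it
            }
            Thread.sleep(50);                 // slow enough to follow live on the graphs
        }
    }
}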
I came across a problem when running jvisualvm the other day: the executable ran, but it did not detect my running application. I ran an old version of my application and found that it was detected. In the end I discovered that, for some strange reason, the java executable has to be running code located on the same drive. It is very strange, but it is the only solution I am aware of. If anyone else has seen this problem, let me know, especially if you solved it another way. As usual, drop me a line if you have questions that I haven't answered in this post.
Happy optimizing.
Labels: code optimization, debugging, Java, jvisualvm, jvm, monitor, profiler, programming
Wednesday, July 6, 2011
Exceptions - if you haven't caught on then you should try to
I am sure this has been said before, but I came across an error yesterday. I was told the code was working, and for the most part I could see that in the staging environment it was working just fine. Then Sunday comes around: my install day. I check the staging environment and find a NullPointerException. The code had crashed at a strange point. With some deduction I realized where the error was coming from: a method had been called on an object that was clearly null, hence the exception. The object was instantiated on the line before, so no doubt there. The method called on that line returns the object in question, so I looked at the method, which happened to be in a different class. I see:
public SomeObject badMethod(InputObject obj) {
    try {
        // ...lots of code, some of which can throw an exception,
        // including the declaration of SomeObject
        // and the return of the instantiated object...
    }
    catch (Exception e) {
        return null;
    }
}
There are two big problems with this, possibly three. The most minor of them is that so much code is in the try block. I am not sure if this is wrong, but it doesn't look right, mainly from a maintenance point of view. I had to delete the try and catch lines to find out which line of code required the exception handling; thanks to Eclipse, that bit was simple.
I found out that the exception was actually a ParseException.
TIP #1: When using try/catch blocks, catch the specific exception. Always be as explicit as you can: the code is more understandable, and you have the specific exception object available to you in the catch block.
The really big problem was how this was handled. Theoretically, I have no problem with returning null from a method; however, if you use a method that could return null, that value must be handled. Sometimes it is better if the method throws an exception instead.
TIP #2: If a method can return null, you must test for it. Otherwise you will end up with a nasty NullPointerException.
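In the calling code that bit me, that test would have looked something like this; the names are the hypothetical ones from the snippet above, with a log4j-style logger assumed:
SomeObject result = badMethod(input);
if (result == null) {
    // Handle the failure explicitly instead of ploughing on.
    log.error("badMethod returned null for input: " + input);
    return;
}
result.doWork(); // safe to dereference now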
In my case, I ended up with an unhandled NullPointerException instead of a ParseException, and with no further information about it either. This brings me to another point: at the very least you want to have the line
e.printStackTrace();
in the catch block. At least then you can track the exception properly. Better still, use a logger and add a little message of your own, perhaps with some variable values printed, so that you have an idea of what went wrong in your log or on your console. With most loggers you can pass the exception itself as an argument, and the logger will handle outputting the relevant information from it.
TIP #3: Use a logger to print a sensible message from the catch block.
In this example there was no real exception handling in the code: the catch block just returned a null, which was not handled in the calling code. My solution was to add a sensible logging line and throw the exception again, so that it is passed up the stack. The key here is that this forces the calling method to handle the exception. In this case that works: I needed an extra try/catch block, but the error is handled better. Now the ParseException is passed up through the calling stack, and the calling method handles the error correctly.
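Putting tips 1, 3 and 4 together, the repaired method ends up looking something like this sketch; same hypothetical names as before, with log assumed to be a log4j-style logger and ParseException the one from java.text:
public SomeObject goodMethod(InputObject obj) throws ParseException {
    try {
        // ...the same parsing and construction as before...
        return new SomeObject(/* parsed fields */);
    }
    catch (ParseException e) {                           // TIP #1: catch the specific exception
        log.error("Failed to parse input: " + obj, e);   // TIP #3: log a sensible message
        throw e;                                         // TIP #4: pass it up so the caller must handle it
    }
}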
TIP #4: Handle the exception; it is not enough just to print the stack trace or log the error.
Conclusion:
The tips I have presented here are very important when dealing with exceptions in Java. It is too easy, especially in Eclipse, which does so much for you, to leave the printStackTrace() in there and not do anything else. But exceptions must be handled. Learn to use them properly and exceptions will be your friend: errors will be handled correctly, and the code will run more smoothly and hopefully be a little more readable.
Happy coding.
Labels: exception, exception handling, Java, programming, try and catch
Thursday, June 30, 2011
Problem with reporting of disk space
I ran out of room on a drive: it was reporting that there was 0 space left. I deleted some files, and after running df -h it was still reporting 0 space left on the drive. I found this
http://serverfault.com/questions/158524/disk-space-not-freed-on-ext3-raid1-after-deleting
which explains the problem quite nicely.
In short, the solution was to type
sudo tune2fs -m 0 /dev/<partitionname>
and all was solved.
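For context: ext filesystems reserve a percentage of blocks for root (5% by default), and df counts that reserve as unavailable, which is why a "full" disk can stay full after deleting files; -m 0 simply sets the reserve to zero. You can inspect the current figure before changing anything, using the same placeholder partition name as above:
sudo tune2fs -l /dev/<partitionname> | grep -i "reserved block count"
One caveat: on the root filesystem it is worth keeping a small reserve, so that root can still log in and system daemons keep working when users fill the disk.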