Recently by David Collier-Brown
Big chunks of the solution to our current "NSA" problems are already solved problems in computer science. Unfortunately, they were solved back in the mainframe era and put on the shelf as too expensive. It's now time to bring some...
The U.S. NSA, our own CSE and various security services are in the midst of a thorough, professional effort to obtain access to everyone's communications, to be able to read them immediately, and to be able to save them away and read them later.
I got dragged into an Agile project a few years back, and expected to hate it. My background is with fixed-price and (semi-)formal-methods projects, so I wasn't expecting to enjoy the experience. I was pleasantly surprised: the people I...
The Parliament of Canada recently started a public consultation on what changes should be made to Canadian copyright law, after loud public condemnation of a set of proposals a few years ago. Having made more money, not less, because "Using Samba" was available electronically, it behooved me to tell Parliament about my recent experience with the trade-offs in copyright law, and in particular the relevance of digital rights management schemes to publishing.
In this month's IEEE Computer, there's an interesting article about using a cloud in a non-business-critical environment: mixed academic and high-performance computing. In their cloud, a professor can book a set of machines for a particular time each week for a lab, or a student can book a particular configuration of machine to do their homework. Time not booked goes into the general HPC pool and is used for non-instructional computing. A commercial entity could use the same tactic: allow people to book time from a set of machines, but pre-book the whole of the machine or machines for the more business-critical quarter- and year-end processing.
At first glance, a properly-done cloud computing agreement sounds like it should save a customer company the work of doing any capacity planning at all. You can let the cloud supplier do all the work. However, even the best cloud service is more expensive than running your own small data center, so it doesn't make sense to have everything in the cloud, always. What cloud or utility computing does allow is for you, the customer, to radically simplify capacity and financial planning: provide only enough resources for the load that you're sure to get, and let the cloud carry all the spikes and year-end rushes.
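The arithmetic behind that trade-off is easy to sketch. The figures below are invented for illustration only (the per-hour prices and the demand curve are assumptions, not data from the article): own enough servers for the steady load, and rent from the cloud only for the spikes.

```python
# Illustrative hybrid-capacity cost comparison; all prices are assumptions.
BASE_SERVER_COST = 1.00   # $/hour for a server you own (amortized; assumed)
CLOUD_SERVER_COST = 3.00  # $/hour for an equivalent cloud server (assumed)

def period_cost(base_servers: int, hourly_demand: list) -> float:
    """Own `base_servers` outright; rent cloud capacity only for the overflow."""
    cost = 0.0
    for demand in hourly_demand:
        cost += base_servers * BASE_SERVER_COST                       # owned, always on
        cost += max(0, demand - base_servers) * CLOUD_SERVER_COST    # burst to cloud
    return cost

# A steady load of 10 servers, with a brief year-end spike to 40:
demand = [10] * 700 + [40] * 20
print(f"all-cloud:   ${period_cost(0, demand):,.2f}")   # $23,400.00
print(f"hybrid (10): ${period_cost(10, demand):,.2f}")  # $9,000.00
```

Even with the cloud priced at three times the in-house rate, sizing the owned base for the sure load and bursting only for the spike comes out well ahead of putting everything in the cloud.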
A colleague asked me why I said to use a ratio of response time to service time of 2:1 in "Sizing to Fail". Was it just magic, or was there any science behind it? It turns out to be a range, found by observation, rather like the number of things you can keep in your mind at once: "seven, plus or minus two".
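One way to see why a ratio near 2 is a reasonable target is a simple single-queue (M/M/1) approximation, where mean response time is R = S / (1 - U) for service time S and utilization U. This model is my illustration, not something the post itself derives; on it, R/S = 2 corresponds to running the server at 50% busy.

```python
# Sketch: response-time to service-time ratio under an M/M/1 approximation.
# R = S / (1 - U), so the ratio R/S depends only on utilization U.

def response_to_service_ratio(utilization: float) -> float:
    """Return R/S = 1 / (1 - U) for a single M/M/1 queue."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for u in (0.25, 0.50, 0.75, 0.90):
    print(f"U = {u:.2f}  ->  R/S = {response_to_service_ratio(u):.2f}")
```

At 50% utilization the ratio is exactly 2; past 75% it climbs to 4 and then shoots up, which is why the observed comfortable range sits near the 2:1 mark rather than closer to saturation.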
Out of the blue, your management asks you for a sizing estimate for the program in production, with 1500 users. You've only ever tested with 100 simulated users in JMeter, you don't have a machine big enough to test 1500 users on, and management needs the answer by the end of today. Stop. Don't run screaming from the building, however horrible this sounds. You can't tell management what will work, but you can tell them how large a system they'll need to avoid guaranteed failure, which may suffice.
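The back-of-envelope version of that lower bound is plain linear scaling. The test-machine size below is a made-up assumption for illustration; the point is the method, not the numbers. Linear scaling is optimistic because it ignores contention, so the result is the minimum below which failure is guaranteed, not a promise of success.

```python
# Hypothetical lower-bound sizing by linear extrapolation from a load test.
import math

measured_users = 100   # simulated users the JMeter test actually handled
measured_cpus = 4      # CPUs on the test machine (assumed, for illustration)
target_users = 1500    # the production population management asked about

# Anything smaller than this is guaranteed to fail; contention means
# the real requirement is likely larger.
cpus_needed = math.ceil(target_users / measured_users * measured_cpus)
print(f"Need at least {cpus_needed} CPUs")  # Need at least 60 CPUs
```

That "at least" framing is the whole trick: it turns an impossible question ("will it work?") into one you can answer honestly by the end of the day.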