I like living in Lafayette, but occasionally its suburban-to-semi-rural nature reminds us that various other species think we’re on their property. Deer eat most anything we plant, gophers re-engineer the yard from time to time, turkeys by the dozen wander by pooping on everything in sight, woodpeckers eat away at the exterior of the house, and don’t get me started on the ants. No big deal.
But when a skunk burrowed under our house last week and exploded at 3:14 AM, that was a bit much.
Skunks are well known for their odor, but in a confined space (say, in the crawlspace directly under our bedroom), it can be truly overpowering. One look outside showed just where the varmint had tunneled in. It was time to call in the professionals.
Critter Control is the only local firm I could find willing to trap and remove skunks — so they got the job. (This is not a review of Critter Control — I’ll do that on Yelp when this adventure is over — but so far no complaints.)
They began with a more complete survey of the property, which showed:
- Burrowing under the fence on the West side of our back yard;
- Burrowing under the North side of our house (the burrow I found);
- Burrowing under the deck on the East side of our house; and
- Burrowing under the gate on the East side of our back yard.
So essentially, there’s an East-West Skunk Superhighway through our back yard.
The Critter Control guy was not surprised to find two entrances under the house — “they like to have a second way out in case of intruders” — and he set traps by the deck where the most recent digging had been.
After a couple of days with no activity, one of the traps was knocked over (probably raccoons). When Critter Control came to reset it, there was new digging at the North entrance, so he put one of the traps there too.
The next day we caught a skunk by the deck; Critter Control removed it promptly and reset the traps.
The next day we caught a skunk by the North burrow under the house, and a neighborhood cat by the deck (the traps are quite humane, and the cat was fine).
After another quiet day or two, we caught another skunk by the North burrow. So I asked the Critter Control guy if there was any end in sight to this project.
His answer was typical of Highly Trained Professionals everywhere — it depends.
Apparently, it’s skunk mating season (yes, this is the kind of activity where you start to learn way more than you want to know about a new subject…). If we have a girl skunk who likes our house, the boy skunks can pick out her scent (don’t ask me how, given the ambient level of boy-and-girl-skunk spray) and will keep visiting until we get rid of her.
So along the Skunk Superhighway, a Skunk Party Palace under our house is the main roadside attraction.
It’s been three days since the last capture, so maybe that was the girl. We can hope.
Is there anything sweeter than getting behind the wheel of a brand new computer?
OK, I realize this is a very old-school attitude, but I’m at least partly serious — the process of upgrading from an OK old computer to a great new computer can still be pretty eye-opening.
I recently upgraded my primary home office desktop to a screaming-fast Maingear F131 workstation, and it’s a huge improvement. I wrote a review of it on the Maingear site.
I realize that Maingear is primarily known for high-end gaming desktops, but I’m no gamer. Instead, I am what used to be called a “power user” – a guy who uses a lot of applications that eat up a lot of computing power. In my case, it’s usually for some type of economic and statistical analysis on large and unwieldy datasets. My review gets into all the specifics of the new machine and my experience with Maingear (I recommend the product and the company highly), but here I want to talk about the economics of desktop computing.
For as long as I’ve been buying computers (my first was 28 years ago), I’ve believed that computers are ridiculously cheap, and that buying the best one you can afford is pretty much a no-brainer. I’m amazed at how few others share this view, so let’s start with a justification based on performance and productivity.
Imagine I can choose to buy one of two computers — standard and high-end. Let’s say high-end saves me ten minutes of lost productivity per day (more on that below). That’s 50 minutes per week, or more than 40 hours per year. If I replace my main computer once every three years, that’s more than 120 hours saved by choosing high-end over standard. The value of my time (whether calculated on billing rate, take-home pay, or any other reasonable measure) justifies paying a lot more than any real-world premium for a high-end computer.
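That back-of-the-envelope calculation is easy to check. Here is the same arithmetic in a few lines of Python, using the assumed numbers above (ten minutes a day, a three-year replacement cycle):

```python
# Productivity math from the paragraph above; inputs are assumptions, not measurements.
MINUTES_SAVED_PER_DAY = 10
WORKDAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 52
YEARS_OF_SERVICE = 3

minutes_per_year = MINUTES_SAVED_PER_DAY * WORKDAYS_PER_WEEK * WEEKS_PER_YEAR
hours_per_year = minutes_per_year / 60
hours_over_lifetime = hours_per_year * YEARS_OF_SERVICE

print(f"{hours_per_year:.0f} hours/year, "
      f"{hours_over_lifetime:.0f} hours over {YEARS_OF_SERVICE} years")
```

The result (about 43 hours a year, roughly 130 hours over three years) is what makes "more than 40 hours per year" a conservative claim.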
In reality, this estimate is very conservative. I hear experts, pundits, and defenders of the conventional wisdom howling that this analysis makes no sense if “all you do is email and Web browsing and a few spreadsheets” or whatever. I submit that these experts have not done a lot of side-by-side comparison testing. Just to pick a simple real-world example, starting Excel and opening a relatively simple one-page spreadsheet can take 2-3 seconds on a standard new computer, and takes less than one second on my new high-end computer. Same with Word, and starting up Outlook, or opening a browser, etc. (And yes, I’ve done the same kind of tests with a standard Mac and a high-end Mac, and I’ve also tried Thunderbird and OpenOffice on Windows; the results are comparable.)
For me, the real benefit is not just saving two seconds a few hundred times a day. I also do some compute-intensive analysis from time to time in Excel and data-intensive work in both Access and SQL Server. A not-especially-huge-and-complex spreadsheet I’ve used in recent economic analysis projects takes more than two minutes to open and recalculate on a decent new computer. On my new Maingear box, that process completes in 45 seconds. That’s a huge difference! Yes, I can get up and make coffee while I wait two minutes, but when we’re in the throes of analytic work, we use many similar tools many times a day; even I don’t drink that much coffee. The performance difference with Access is even more pronounced.
Yes, this post has all been about personal productivity and not about using this new system as part of a Digipede Network grid; I’ll have more to say about that another day. But suffice it to say that a network of potent desktops like the Maingear F131 would make a very powerful grid indeed.
Tags: Grid applications · Usability
As many of you probably saw (thank you, Google News Alerts), Digipede has just released a new version of our award-winning grid computing software, the Digipede Network. Whew.
One of the most painful and joyful events in the life of a software company is the release of new software. While this seems like an obvious statement, let me just say to all my friends who are NOT in the software business — you have no idea.
Many customers, prospective customers, and industry observers shrug and even smirk at a press release that “merely” announces the release of a new version of an existing product. (Smirk away — here’s ours.) But it’s gratifying to receive congratulations from those who actually understand this process (thanks, friends at Microsoft!).
So now that the apparently-endless cycle of build and test is over, and the last (known) snafu has been fixed (how the %^$&$! did we put an uninstallable version of our SDK out on our community site?), we can take a deep breath, step back, and discuss what this release means — to our customers. Because, as our press release says, this is a software release entirely driven by our customers.
From the beginning, we set out to make the Digipede Network “radically easier to buy, install, learn, and use” than any other distributed computing platform. Reviewers say we’ve done that, and customers tell us they can come up to speed quickly with our software. Ah, but once a customer comes up to speed quickly, that customer gets ideas! “Why does Digipede use all the cores on each compute resource? Can we reserve one or more for other uses?” “When I try to delete thousands of jobs at once, weird things happen — are you guys just idiots or what?” “I thought you guys were supposed to be Microsoft-savvy; why can’t I host a .NET 4 application on your software?” “When we run millions of jobs with lots of really short tasks, the Digipede database gets really big — can you fix that?” You get the idea.
Well, to be honest, we never tested that “queue thousands of jobs while thousands of other jobs are running and then just delete the thousands that are queued” case, so yeah, weird things happened. Should be better now. And yeah, .NET 4 is a reasonable expectation from us — works fine now. Yes, it’s true that there are ways to make the Digipede database grow — and while we’ve always had tools for managing that, those tools are simpler and more useful now.
That multi-core thing turned out to be the most popular one, though, and it’s been one of my pet issues for a while, so let’s talk about that in more detail. I’ve spoken at conferences, written articles, made videos, and given interviews for years saying basically this:
- Mainstream developers know single-threaded object-oriented coding techniques, which take advantage of a single core.
- Meanwhile, chip makers are developing CPUs with more and more cores.
- The Digipede SDK is the simplest way for a mainstream developer to WRITE WHAT THEY KNOW (i.e., single-threaded object-oriented code) and EXECUTE that code on multiple cores on a chip, multiple chips in a box, and multiple boxes on a grid, all using the same programming paradigm.
And this has been great for us and for our customers — up to a point. For purely compute-intensive applications, this approach scales linearly in cores and machines up to hundreds and even thousands of multi-core compute resources. But many complex applications have a lot of I/O requirements as well, and just loading up (for example) an 8-core server (most likely, a dual quad-core box) with 8 cores worth of computation can actually slow down execution as processes wait for I/O.
So in the most recent release, we took a very simple brute-force approach to fixing this issue – we now allow users to “reserve” one or more cores per compute resource through a simple option in Digipede Control. Early users report excellent results, with 6 or 7 cores computing away while the remaining one or two handle all other chores (including I/O). Equally important, this approach is robust to additional increases in the number of cores per chip (which is forecast to reach several dozen within just a few years).
If you want to take the new version for a spin, ask for a free evaluation copy here.
Now, how about what’s NOT in our press release? Well, you won’t find the word “cloud” in there…
Is it just me, or has the cloud meme really jumped the shark? Look. I used cloud computing before it was called that, and I’ll use it after that name has wandered off into the scrapheap of forgotten marketing buzzwords. If a cloud is Google and a cloud is a cluster in a datacenter somewhere the user can’t see it, then a cloud is everything and nothing. If a cloud is Amazon or GoGrid, then sure, our customers can deploy the Digipede Network there, or they can deploy it on their own infrastructure (then, if they want to, they can tell their bosses they’ve built a “private cloud” for all I care!).
The market knows Digipede as a provider of distributed computing software for the Windows platform, and as a provider of high-productivity distributed computing tools for .NET developers. That’s our role in the cloud and on the ground and everywhere in between.
Tags: Cloud computing · Customer Service · Grid applications · Press coverage · Usability
Leaving the energy industry turns out to be harder than I thought.
When we started Digipede (more than 7 years ago!), my partners and I had spent more than a decade working together in the electric utility industry, and frankly were ready for something new. While some of the ideas that eventually became Digipede had been rattling around in our heads for years, we built the Digipede Network as a general-purpose grid computing framework, not a tool for electric utility IT departments. Indeed, while we knew grid computing was important in finance, military, biotech, and manufacturing applications, we didn’t think utilities would be particularly interested.
I guess you never know.
Over the past year, one of our hottest segments has been the electric utility industry. We now have customers in generation, transmission, distribution, and power marketing companies, running a variety of applications from risk management to market simulation models. We’ve had an opportunity to work with utility software giant Ventyx (recently purchased by even-more-giant ABB, the same ABB that bought our previous utility software company Energy Interactive — I think the world really is smaller than I realized…).
Our first bit of collaboration with Ventyx has involved adapting their energy planning and analytics software tool, PROMOD IV, to run on the Digipede Network. This has been an instant hit with utility customers. (OK, the phrase “instant hit” may not quite capture the pace of utility procurement processes, but you get the idea.)
PROMOD is a very detailed simulation model, and users often have to run thousands of scenarios — so projects can take days to complete on a single high-performance workstation. (Indeed, my first encounter with PROMOD was in the early 1980s, on a mainframe at Portland General Electric, but that’s another “small world” story…) Users tell us they end up walking from machine to machine starting multiple runs before going home at night — we call this behavior, which goes far beyond the utility industry, the “sneaker grid.”
Not surprisingly, “sneaker grid” users make GREAT Digipede customers, because (a) they know how inefficient and limited such manual work is, and (b) work is really piling up! Ventyx knows this too, and actually had a grid solution through Sun a few years ago — but nobody wanted to install a Sun grid for a single application when all their other infrastructure was on Windows. Opportunity knocks…
Now PROMOD IV users have a scalable solution that allows them to get order-of-magnitude increases in modeling throughput, using the tools and platform they already know and understand. Ventyx and Digipede worked together on a description of this solution, which can be found here.
So now I’m a grid computing guy AND a utility guy. Full circle.
Tags: Grid applications · Utility Industry
February 2nd, 2010
As has been widely reported, Microsoft is ditching the “Gold” designation for its partners. We’re OK with that — in fact, we thought we’d get ahead of the curve and ditch our Gold certification now.
It’s time to renew our Microsoft partner program membership (always an adventure, although somewhat easier than it used to be). Despite the remaining huge shortcomings of the program (don’t worry, I won’t repeat my one million earlier posts on this subject), we’ve decided to renew again. A quick look at the requirements showed we could easily renew at the Gold level, or the Plain Old Certified level, based on the number of “Partner Points” we have (or could accumulate by our renewal date).
BUT a more careful review showed two changes. One, the Gold designation will vanish partway through this renewal period for us. And Two, to achieve Gold, there’s a new requirement that we force our customers through yet another Microsoft “customer satisfaction” survey process. So in exchange for further inconveniencing our customers at Microsoft’s request, we get a tag that will be discontinued shortly? Only Microsoft (and frankly, only the Microsoft Partner Program group) could come up with a new anti-customer requirement just in time for a program to be phased out.
No-brainer, right? No thanks.
OH, but you should HEAR the wailing and pleading from the Partner group. “Do you REALLY want to give up ALL the benefits of being GOLD??” Ummm, you mean the ones you’ll supposedly be taking away this year anyway? Yes. “Do you REALLY want to renew at a REDUCED level?” Ummm, you mean the level ALL Gold Certified Partners will have later this year? Yes. (I especially like that second argument, which I’ve heard both from humans on the phone and from the automated messages on the Partner Program Web site — “we really don’t think Gold is important, we’re phasing out the program, now we’re stressing ‘competencies’ over simple program level designations, but surely you don’t want to renew at the level of those unwashed masses beneath you?”)
So in any case, look for the handsome blue logo to replace the handsome gold logo we’ve been using, and look for no other differences whatsoever in our fine relationship with Microsoft and its customers. (Except of course for the slight improvement for our own customers — the ones we won’t be hassling with another request for “just 10 or 15 minutes” to fill out another meaningless survey from Redmond. You’re welcome.)
Tags: Partnering with Microsoft
Earlier this summer, we marked the one-year anniversary of the connection of our rooftop photovoltaic (PV) system. Last year I promised a more complete analysis of the economics of this system once we had sufficient history. Here goes.
First, let’s not bury the lead: Our PV system generated 36.5% of the electric energy used by our house in the past year, saving 52.8% of the money we would have spent on electricity. Further, we received a 7.7% return on our investment, tax free, and it looks like that will go up next year.
We bought our PV system from Borrego Solar last spring, while we were replacing our leaky roof. The main components were 18 PV Modules from Sanyo (about 188 Watts each), and one Xantrex power inverter (for converting DC power from the panels to AC power for the house). The installation went smoothly, and the system has operated without a hitch for the past year.
My calculation of our savings is based on several inputs. The amount of energy supplied by PG&E is on our meter (and on our bill every month), and the total energy supplied by our PV system is available from the LCD on our inverter, which shows cumulative energy produced to date. So the amount and percentage of energy supplied by the PV system is straightforward:
9,261 kWh supplied by PG&E + 5,323 kWh supplied by our PV system = 14,584 kWh total consumed by our house, so:
5,323 / 14,584 = 36.5%
So far, so good. Calculating the monetary savings is quite a bit more complex, since the good folks at PG&E and the California Public Utilities Commission have created residential rate structures that are as much about social engineering and income redistribution as they are about electricity, but let’s have a look.
We’re on the usual PG&E residential rate, E-1 (which can be read in all its glorious detail here.) The most important attribute of this rate is that the price per kWh increases with the amount of energy the customer uses each month. So you pay less for the first few kWh, more for the next few kWh, and so on until you’re paying A LOT more for your last kWh (especially if you have a big house, three kids, a pool, an air conditioner, multiple computers, electric oven, electric dryer, and so on.) The rates by “block” of energy consumed are:
Total Energy Rates ($ per kWh)
- Baseline Usage $0.11531
- 101% – 130% of Baseline $0.13109
- 131% – 200% of Baseline $0.25974
- 201% – 300% of Baseline $0.37866
- Over 300% of Baseline $0.44098
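To see how those blocks add up, here is a sketch of a monthly energy charge under this structure. The 300 kWh baseline and 1,200 kWh usage are made-up illustrative numbers (actual baselines vary by region and season), but the rates are the ones listed above:

```python
# E-1 style increasing block rates, as listed above.
# Each tier: (upper bound as a multiple of baseline, $ per kWh).
TIERS = [
    (1.00, 0.11531),          # Baseline usage
    (1.30, 0.13109),          # 101% - 130% of baseline
    (2.00, 0.25974),          # 131% - 200% of baseline
    (3.00, 0.37866),          # 201% - 300% of baseline
    (float("inf"), 0.44098),  # Over 300% of baseline
]

def monthly_energy_charge(usage_kwh, baseline_kwh):
    """Total energy charge for one month under increasing block rates."""
    charge, lower = 0.0, 0.0
    for multiple, rate in TIERS:
        upper = multiple * baseline_kwh
        if usage_kwh > lower:
            charge += (min(usage_kwh, upper) - lower) * rate
        lower = upper
    return charge

# Hypothetical month: 1,200 kWh against a 300 kWh baseline.
print(f"${monthly_energy_charge(1200, 300):.2f}")
```

Notice how the last 300 kWh in that hypothetical month cost nearly four times as much per kWh as the first 300 — which is exactly why displacing top-block purchases matters so much.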
The “Baseline” amount varies by region, but it’s small relative to our household usage, so without the PV system, we usually end up using at least some energy in that last “Over 300%…” block, where the price is very high. (Indeed, this rate went up during the year, which complicates the analysis a bit, but it’s just more rows in my spreadsheet — no big deal. But to give you an idea of how wonderful it is to put up with PG&E as our utility, the price of the top block went up from 35.876 cents per kWh to 44.098 cents per kWh this year, an increase of about 23%. This year alone.)
This rate structure, known as “increasing block rates,” is California’s way of reminding everyone (again) that it’s expensive to live in California.
On the bright side, it’s also a great way to encourage conservation and the installation of alternative energy systems like our PV system.
As mentioned above, the PV system only generates about 36.5% of our total electric energy consumption — but it displaces our most expensive purchases from PG&E. Since we’ve installed the PV system, we’ve almost never had to buy any “top block” power from PG&E. By reducing our consumption of the most expensive power to essentially zero, we saved a much higher percentage of the money we would have spent on power — 52.8% to be exact.
Here’s the month-by-month breakdown:
| Bill w/o Solar | Bill w/ Solar | Savings |
|---------------:|--------------:|--------:|
| $376.15 | $274.72 | $101.43 |
| $289.27 | $168.92 | $120.36 |
| $383.64 | $215.87 | $167.77 |
| $225.34 | $66.93 | $158.41 |
| $357.19 | $127.64 | $229.54 |
| $362.92 | $110.57 | $252.35 |
| $360.71 | $109.27 | $251.44 |
| $240.86 | $53.46 | $187.40 |
| $351.01 | $149.98 | $201.03 |
| $234.80 | $96.80 | $138.00 |
| $231.02 | $120.18 | $110.84 |
| $305.59 | $217.01 | $88.58 |
But — is this a good investment? How does putting up a rooftop PV system stack up against other potential investments?
Well — after a substantial rebate from California and a substantial tax deduction from the US, the total cost of our PV system was almost exactly $23,000. A savings of $1,770 in one year amounts to a simple annual rate of return of 7.7%, tax free (i.e., we’re saving after-tax money). No tax-free fixed-income securities are paying close to that right now, although certainly a municipal bond is much more liquid than PV panels stuck to my roof.
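For anyone checking my math, the returns fall straight out of those figures:

```python
# Return on the PV system, using the post's own figures.
system_cost = 23_000       # net cost after CA rebate and US tax deduction ($)
savings_year_1 = 1_770     # first-year avoided electricity purchases ($)
savings_projected = 2_000  # projected savings at the new, higher rates ($)

return_year_1 = savings_year_1 / system_cost
return_projected = savings_projected / system_cost

print(f"year 1: {return_year_1:.1%}, projected: {return_projected:.1%}")
# -> year 1: 7.7%, projected: 8.7%
```

Remember these are simple (undiscounted) annual returns on after-tax dollars, which is why the comparison to tax-free fixed-income securities is the relevant one.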
Personally, other investment decisions I made early in 2008 did not go nearly as well as this one — negative returns were common. So I am happy with this investment at this time.
There is also good (?) news on the future value of this investment. As I mentioned, earlier this year the price of top-block power from PG&E increased significantly. When I plug in the new higher prices for all of the next 12 months, I estimate savings of more than $2000, for a return of about 8.7%. Further, these systems are becoming increasingly common throughout the Bay Area, so if we ever sell this house, we’ll likely recoup all or most of the original investment (home appraisers are learning to incorporate the value of PV systems into the value of properties here in California).
Overall, I’m happy — our PV system has worked without a hitch, and has delivered a decent return on investment.
But I could be happier.
Our system is far too sensitive to shade, delivering less than half of rated capacity whenever even 5-10% of the system is in shade. There should be a simple engineering fix to this problem, and I’ll be investigating over the next few months.
Our system also has an almost useless user interface; the analysis presented above required a ridiculous amount of manual effort on my part to develop. For example, the only way to look at the performance of our Xantrex power inverter is to go outside and squint at the two-line LCD output, and bang on the side of the box (literally) to display a handful of statistics, the way our caveman ancestors did back in the 1970s when they did their earliest residential solar analysis. For trivial cost, any of the four vendors could have included all the technology necessary for me to capture real-time system performance data on my computer via my home network, but apparently among PG&E, Borrego, Xantrex, and GE (maker of my not-so-smart meter), nobody bothered to do so.
This interface issue is particularly frustrating, because anyone in the tech industry knows that this actually matters. If a PV system is a crude slab of silicon that sits on your roof and pumps electrons into your wires, it’s unappealing to many. If it’s an elegant system integrated with your network and life, it’s far more likely to become mainstream.
So in my humble opinion as an early adopter, solar power has passed a crucial threshold — it has become reasonably cost effective in the most expensive residential markets. But some simple technical and market innovations could really help it take off.
Tags: Utility Industry
I went to Microsoft’s Mountain View office last week, where I did an interview with William Leong, Microsoft ISV Evangelist. We talked about Digipede’s market, products, and the need for grid computing in businesses of all sizes. We even talked about IronPython, and how a last-minute addition to a recent version of our software has been driving new business for us.
The video of that conversation is now on Channel9; you can watch it here.
We’re offering developers who watch that video (and even those who don’t) a free copy of the Digipede Network Developer Edition — go to this page to get yours today.
Many thanks to William and the rest of the Microsoft Evangelists for giving us this opportunity to get the word out about how Digipede and Microsoft work together to make software run faster and scale bigger!
Tags: Grid applications · Partnering with Microsoft · Presentations · Press coverage
Who says there’s no good news for financial companies?
Penny Crosman provided some good news for banks, hedge funds, and other money managers in her article today in Wall Street & Technology — good news for financial developers and IT professionals who need to access more processing power without complex application re-engineering.
You can read the article for yourself — there are good quotes from AVM CTO Paul Algreen, a longtime Digipede customer — but from my perspective, the gist is this:
- CPUs are getting faster these days almost exclusively through putting more cores on a chip.
- Hence, when you buy a fancy new server, performance only improves for applications that take advantage of multi-core architectures.
- Yet most applications are single-threaded, leaving all but one core doing, umm, nothing.
- AVM noticed this problem more than two years ago, and started using the Digipede Network to address it.
- They’ve adapted compute-intensive legacy applications to run on a grid of multi-core boxes without expensive re-engineering, seeing huge performance gains.
- Thanks to the intuitive programming model offered by the Digipede Framework SDK, AVM has added more and more applications to the grid since then, and they haven’t looked back.
This is quite typical of the experience many Digipede customers have had — that for most applications in financial services, multi-core and grid computing can be handled most effectively as two cases of the same general distributed computing problem.
And yes, I’m going to plug our now-famous four-minute video on this topic again — you can watch it here. Then you can request a free evaluation copy of the Digipede Network, and try it out on your own compute-intensive applications. Because Intel and AMD aren’t waiting for the world to re-tool a few million enterprise developers; they’re banging out chips with more and more cores with every new generation.
But with the right tools, you can take advantage of all that power — and that’s a welcome dose of good news for Wall Street!
Tags: Grid applications · Press coverage · Usability
Just saw a good article in Dr. Dobb’s about multicore OO development by John Gross and Jeremy Orme of Connective Logic in the UK. A very different approach from Digipede’s; it may be possible to combine the two (haven’t dug any deeper yet).
For our now-classic discussion on a closely-related topic, you can start here.
I have blasted Microsoft (more particularly, the Microsoft Partner team) about their Partner Web site in the past, and was particularly vocal about the problems with the process of renewing our membership as a Gold Certified Partner in January, 2008. (You can see my rant here, and my follow-up rant here.)
In the spirit of giving credit where credit is due — kudos to the Microsoft Partner team for improvements to the re-enrollment process AND the stability of partners.microsoft.com. I recently re-enrolled Digipede as a Microsoft Gold Certified Partner, and the process went without a single hitch this year.
Clearly, there are still lots of improvements that can be made to the Partner Web site (my suggestions from last year are still relevant), but streamlining the re-enrollment process and improving the stability of the site are much appreciated. Thank you!
Tags: Partnering with Microsoft