The Cranky Admin

Hey Brother, Can You Spare a Server?

YellowDog's unusual take on providing capacity for large render projects.

What do you do if you have a whole bunch of servers that aren't being used? One belief system holds that you move your workloads into the cloud and then decommission your on-premises infrastructure. But there are plenty of organizations that, for regulatory, economic or even office-politics reasons, can't take that leap of faith. For them, a Bristol, U.K.-based startup named YellowDog may offer a way to squeeze some revenue out of that idle hardware.

The short version of YellowDog is this: you install its software on your systems, and it puts their spare processing power to work as part of a SETI@home-inspired distributed computing swarm, rendering projects for commercial 3D and CGI clients. Those clients purchase time on the swarm at a fixed price on a per-node, per-hour basis.

The concept piques my interest, though I admit to being skeptical. There are challenges to this being practical, and certainly challenges to it being economical enough to help keep on-premises ownership of IT infrastructure viable in the face of a growing public cloud. YellowDog's CEO Gareth Williams took some time out to answer my questions.

It Started With an iPhone
YellowDog started in a coffee shop at the top of Paddington station in London. Williams and others were looking at the 64-bit CPU and gigabyte of RAM in their iPhone 6es and thinking, "Wouldn't it be nice to monetize that?" SETI@home was on their minds, as was the sheer number of phones out there; the idea just sort of stayed with Williams over the years.

He got to thinking: how could he make a commercial application out of a SETI@home-like distributed computing approach? He was talking to his brother-in-law, who works at the University of the West of England (home to a very well-regarded animation course), about how there is never enough compute power available for rendering. Shortly thereafter, YellowDog was born.

The first proofs of concept showed that trying to do anything on mobile phones was pointless. They're too weedy, and the chip architectures are too diverse. But there is still a lot of unused computer hardware out there.

Licensing rendering software has been one of the biggest difficulties. How do you build a cloud when the software companies require you to use a dongle? This is further complicated by the fact that a huge part of the business is based around simplifying the outsourcing of rendering.

Today, outsourcing rendering is a bit miserable. Render farms are technically quite complex and artists don't want to fiddle with the IT side; they want it to "just work." From experience, I know that render environments can be quite touchy. Breathe on them hard and whole datacenters fall over.

That said, YellowDog has made headway in solving these problems. They have clients ranging from small local 3D shops to big-name video game houses, though the biggest demand is apparently coming from companies who do work on kids' TV shows.

The Economics of the Thing
If you'd asked Williams two years ago where he'd be getting his compute, he'd have said "students." He figures they're cash-poor and have a lot of spare compute. Today he's turning more to public cloud providers, service providers and large businesses, mostly because they offer large amounts of compute without the need to build large numbers of individual relationships.

It's difficult to say exactly what to expect in terms of income from attaching a server to YellowDog. Williams estimates that a 16-core machine with 15 percent average utilization (thus offering its spare 85 percent to YellowDog) would generate about £1000 ($1,250 USD) per year. Clearly this isn't enough to justify setting up Bitcoin mining-like farms for YellowDog, but it more than pays for electricity and cooling of a server you already own and have to keep running for some reason.
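To put that number in perspective, here's a quick back-of-the-envelope calculation. It's a minimal sketch based purely on Williams' figure; the assumptions that the box runs 24/7 and that spare capacity is paid out at a flat per-core-hour rate are mine, not YellowDog's published pricing.

# Back-of-the-envelope: what Williams' estimate implies per spare core-hour.
# Assumptions are mine, not YellowDog's: the server runs 24/7 and its spare
# 85 percent of capacity is paid out at a flat per-core-hour rate.

CORES = 16
SPARE_FRACTION = 0.85          # 15 percent average utilization leaves 85 percent spare
HOURS_PER_YEAR = 24 * 365
ANNUAL_INCOME_GBP = 1000.0     # Williams' ballpark figure

spare_core_hours = CORES * SPARE_FRACTION * HOURS_PER_YEAR
rate_per_core_hour = ANNUAL_INCOME_GBP / spare_core_hours

print(f"Spare core-hours per year: {spare_core_hours:,.0f}")        # ~119,136
print(f"Implied payout: £{rate_per_core_hour:.4f} per core-hour")   # ~£0.0084

Pocket change per core-hour, in other words, but on hardware you're already powering and cooling it's found money.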

Not all servers are equal, and YellowDog has its own scoring system to account for that. Servers are tested and rated to determine how much their owners will be paid, since core counts, RAM, storage, networking and so on all vary.
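YellowDog hasn't published how that scoring works, so the sketch below is purely illustrative: invented attributes, invented weights. It's only meant to show the general shape of the idea, rating a server on its specs and deriving a payout multiplier from the result.

from dataclasses import dataclass

# Illustrative only: YellowDog's real scoring formula isn't public, and
# every weight here is made up for the sake of the example.

@dataclass
class ServerSpec:
    cores: int
    ram_gb: int
    storage_gb: int
    network_gbps: float

def score(spec: ServerSpec) -> float:
    """Weighted score across the kinds of attributes YellowDog says it rates."""
    return (spec.cores * 10.0
            + spec.ram_gb * 1.5
            + spec.storage_gb * 0.01
            + spec.network_gbps * 5.0)

def payout_multiplier(spec: ServerSpec, baseline_score: float = 300.0) -> float:
    """Scale a server's earnings relative to a hypothetical baseline box."""
    return score(spec) / baseline_score

box = ServerSpec(cores=16, ram_gb=64, storage_gb=2000, network_gbps=10.0)
print(f"score={score(box):.0f}, multiplier={payout_multiplier(box):.2f}x")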

YellowDog is not currently accepting new server owners into the pack; there's a waiting list for those interested. The company already has access to over 168,000 cores and doesn't yet have enough clients to keep the whole swarm consistently busy.

Despite this, there are occasions where YellowDog finds the existing swarm inadequate, and ends up bursting to the public cloud in order to meet demand. So far, this sort of outsourced render farming has proven to be very bursty, with Fridays apparently being the worst.

Future Plans
YellowDog is looking to extend its model to banks, engineering firms and other businesses with large batch workloads that have to run. Anything that could help smooth the demand curve is especially welcome, but there's a good deal of work to be done yet.

As long as the batch application has some sort of API, in theory it can be slotted in. That's an oversimplification, of course, and it takes work for each new application, but YellowDog is building a framework and SDKs to let developers integrate their products and batches with the swarm. Further down the road are considerations ranging from deep learning to biotech. Insert buzzwords as appropriate.
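To make that concrete, here's a rough sketch of what such an integration might look like. To be clear, these class and method names are entirely hypothetical; the framework and SDKs weren't released at the time of writing, and the point is only to show the shape of the thing: describe a batch of independent work units, hand it to the swarm, and wait for the results.

# Hypothetical sketch only: none of these names come from a real YellowDog SDK.
# The shape of the integration is the point -- independent work units go in,
# results come back once the swarm has chewed through them.

class SwarmClient:
    """Stand-in for whatever client library the eventual SDK provides."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def submit_batch(self, tasks: list) -> str:
        """Submit a batch of independent tasks; returns a job ID."""
        ...  # illustrative stub

    def wait_for_results(self, job_id: str) -> list:
        """Block until every task in the job has been processed."""
        ...  # illustrative stub

# A render job is naturally parallel: one frame per task.
tasks = [{"scene": "episode_04.blend", "frame": f} for f in range(1, 1001)]

client = SwarmClient(api_key="not-a-real-key")
job_id = client.submit_batch(tasks)
frames = client.wait_for_results(job_id)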

One thing YellowDog has run into is the need to build in locality awareness, encryption and security for workloads that must operate under compliance restrictions. While this is already an issue for rendering jobs, Williams foresees a need to pay closer attention to it as the company moves into adjacent markets.

Banks, for example, could use YellowDog to burst first internally, using the unused compute in their own datacenters, and then burst to secured public clouds if that proved insufficient. Is any of this sounding familiar to anyone else, or is this just me? Once upon a time, this was the sort of thing we envisioned being able to do with hypervisor-based virtualization.
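That tiered approach is easy enough to express. Below is a purely speculative sketch, mine and not YellowDog's, of how a scheduler might split a batch between internal spare capacity and a cloud it's allowed to burst to; every name and number is invented.

# Speculative sketch of tiered bursting: use idle internal capacity first,
# spill to an approved public cloud only if that runs out. All names and
# figures are invented for illustration.

def place_work(core_hours_needed: float,
               internal_idle_core_hours: float,
               cloud_burst_allowed: bool) -> dict:
    """Split a workload between internal spare capacity and public cloud."""
    internal = min(core_hours_needed, internal_idle_core_hours)
    overflow = core_hours_needed - internal
    if overflow > 0 and not cloud_burst_allowed:
        raise RuntimeError("internal capacity exhausted and cloud bursting disallowed")
    return {"internal": internal, "public_cloud": overflow}

# A Friday spike: 50,000 core-hours of work, 30,000 idle core-hours in-house.
print(place_work(50_000, 30_000, cloud_burst_allowed=True))
# -> {'internal': 30000, 'public_cloud': 20000}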

We outsource, we bring it back in house. On site, off site, round and round it goes. Keep an eye on the pendulum, as it might just be starting its swing back.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.
