Open Compute Project: open design applied to data centers
Have you ever wondered what would happen if data centers could share their specifications across companies to build a perfect infrastructure? I thought I was dreaming when I realized that this already exists (and it is very good indeed): the Open Compute Project is out there for exactly this purpose, and I very much liked their effort to standardize these technologies in an open way.
The Open Compute Project was born in 2011 inside Facebook, with a simple target: to build the most efficient data center ever, using the best technology at the lowest possible cost. The engineers were smart enough to release everything the team had collected while building the infrastructure from the ground up, under an open hardware license, making it all available to other companies and letting them contribute to these useful specifications.
The term “open hardware”, in my humble opinion, undersells this project: yes, you get the papers to build your own server, but the goal of the whole Open Compute Project is to cover everything present in a data center (and it covers it all, oh yes). Even, well, the walls.
I said “walls” because the Open Compute Project, as you can see for yourself, is not a traditional open source / open hardware project where people open up their software or their latest board. Opening an entire data center means being open down to the root: running open source software on open boards can be easy, but people can do more, and a certification system like the one offered by the Open Compute Project takes the “open” question beyond anything code-related or board-related.
Open to the rack
The Open Compute Project offers a wide set of tools to build your own data center: you can download a PDF for every essential part of the structure and replicate it yourself, or buy certified hardware (and machines) to do the job. This means you have certified motherboards and CPUs, but also certified racks, along with instructions to build your own state-of-the-art rack for your servers. The rack is, to me, one of the most important parts of the whole infrastructure designed by Facebook and the Open Compute Project, because thanks to their Open Rack you can deploy machines in a colocation facility without having to build an entire room from the ground up.
So, for a from-scratch build of a data center, you can download models to set up a nicely designed, low-consumption server room. If you don’t need such a large and money/time-expensive structure, you can always take the Open Rack and place it in a colocation facility.
This is the first reason the Open Rack is so important: it allows you to deploy OCP-certified hardware in a non-certified center. The second reason is that, inside the Rack, we can see how companies can cooperate to build a better world: Intel, for instance, has submitted a design guide that provides an overview for implementing a state-of-the-art intra-rack interconnect scheme, using a New Photonic Connector to deliver substantial architectural benefits. That is why open design can win even when we are talking about buildings and server farms: building the most efficient farm at the lowest cost is an important mission for a bunch of reasons.
While I believe a tool like the Open Rack is essential to this project (as a showcase of collaborative data center building too), the second component that amazed me was the storage category, with its software projects and hardware projects, like the Open Vault. From the official website:
The Open Vault is a simple and cost-effective storage solution with a modular I/O topology that’s built for the Open Rack. The Open Vault offers high disk densities, holding 30 drives in a 2U chassis, and can operate with almost any host server. Its innovative, expandable design puts serviceability first, with easy drive replacement no matter the mounting height.
Clearly, you can download everything from the official page to build your own Open Vault and set up your storage system exactly like Facebook, or like any of the other companies that have adopted this kind of hardware.
What you can do with a better data center
Well, once you have built your own data center following Open Compute Project guidelines and designs, what should you do?
First, you can improve your own structure to achieve better results for your company and submit those improvements to the Project for review. This way your company gains visibility and relevance, and definitely contributes to a better world. Why? Well, because we have built power-hungry farms of machines and drained resources from a planet that was already suffering. This way we can start giving something back, building a sustainable model around the web, which today consumes an enormous amount of energy. OCP-powered rooms, buildings, and farms can rehabilitate us as power-consuming beings: we can get better and better at power management by collaborating on an open design for what powers the web, one of the most important structures mankind has ever built.
Need other reasons? Well, this way you and your company free yourselves from the vendor lock-in syndrome that still affects parts of the server hardware market. Open design applied to data centers works just like open design applied to software and source code: a way to survive in the technology era without paying this much for this, that much for that. You can replace costs with crafting time, and achieve an even better final result that others can reuse.
But let’s talk numbers: as the official site states, with an Open Compute Project design you can build a data center that is 38% more energy efficient and 24% less expensive to build and run than others. Facebook says its OCP-stocked Prineville data center has a power usage effectiveness (PUE) of 1.09, which beats the US Environmental Protection Agency’s best-practice rating of 1.5. A big win for a stakeholder like Facebook, which needs so much storage for the whole social graph dataset. And clearly, if you are a web or hosting company, you can lower the power cost of your entire infrastructure and maximize your ROI. But putting profits aside: who can afford this? Who will adopt a totally open infrastructure?
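As a quick sanity check on those figures: PUE is simply the total power drawn by the facility divided by the power that actually reaches the IT equipment, so 1.0 would be the theoretical ideal. A minimal sketch, with wattages invented purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 means every watt goes to computing; the overhead
    (cooling, power conversion, lighting) is what pushes it above 1.0.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical wattages, chosen only to reproduce the two ratios above:
# a facility drawing 1,090 kW total whose servers consume 1,000 kW.
print(round(pue(1090, 1000), 2))  # Facebook's reported Prineville figure
print(round(pue(1500, 1000), 2))  # the EPA best-practice threshold
```

In other words, Prineville reportedly spends only about 9 cents of overhead for every dollar of power delivered to servers, versus 50 cents at the EPA best-practice level.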
The adoption rate of Open Compute Project
Mark Roenigk, COO of Rackspace, said less than a year ago:
We will see significant Open Compute infrastructure by that time, I think we’d be approaching between 35 and 50 per cent of new installations of servers.
Well, that is good news, huh? Because if the COO of Rackspace says something like that, it surely shows the interest of the corporate technology landscape in this kind of innovation, based on open design and aiming at a more sustainable infrastructure for the whole web. Mainly because adopting such a strategy means spending less and earning more, and we know that business people like money very much. The timeframe Roenigk meant was three years, so we can expect that kind of growth for the Open Compute Project, along with a substantial rise in contributions. The higher this adoption rate, the better the whole structure that powers the entire web will be.
Unfortunately, right now we can’t say whether the Open Compute Project will be a huge mess or a huge success, but we can certainly say that this is a huge opportunity for open design to show what can be done in a different and unusual area like data center design.