Hard drives, and SAS (Serial Attached SCSI) drives especially, face growing competition from a new breed of storage device: the solid-state drive (SSD).
An SSD stores data in solid-state memory (NAND flash chips) rather than on conventional hard disk platters. Today's SSDs are large enough to be useful, and although not exactly economical, they have come down enough in price that they can enter the conversation when it comes to outfitting a new workstation.
The advantage of SSDs? There are several, including less noise and better reliability in the face of environmental issues like vibration, because unlike the HDD, the SSD has no moving parts. But the real motivation to choose an SSD is performance. More specifically, it's much lower latency: the time that elapses between asking the drive for data and receiving it. The SSD doesn't necessarily offer a big benefit over hard drives in bandwidth (how quickly the data comes once it starts coming), but it eliminates the seek time of the hard drive's head, delivering an indisputable advantage in access time. The downside is a glaring one: price.
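The latency-versus-bandwidth distinction is easy to see with a quick experiment. The sketch below (Python, with an arbitrary scratch-file size and path; results will be skewed toward RAM speed by the OS page cache unless the file is much larger than memory) times one sequential pass against a batch of small random reads:

```python
import os
import random
import tempfile
import time

# Scratch file of 16 MB; size, block size, and path are arbitrary choices
# for illustration, not tuned benchmark parameters.
SIZE = 16 * 1024 * 1024
BLOCK = 4096
path = os.path.join(tempfile.gettempdir(), "io_probe.bin")
with open(path, "wb") as f:
    f.write(os.urandom(SIZE))

# Sequential read: dominated by bandwidth (MB/s).
t0 = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
seq_s = time.perf_counter() - t0

# Small random reads: dominated by access latency (head seeks, on an HDD).
offsets = [random.randrange(0, SIZE - BLOCK) for _ in range(1000)]
t0 = time.perf_counter()
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)
        f.read(BLOCK)
rand_s = time.perf_counter() - t0

print(f"sequential: {SIZE / seq_s / 1e6:.0f} MB/s")
print(f"random 4K reads: {rand_s / len(offsets) * 1e6:.0f} us each")
os.remove(path)
```

On a spinning disk the per-read figure is dominated by seek time (milliseconds); on an SSD it collapses to tens of microseconds, while the sequential number often changes far less. That gap is the access-time advantage described above.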
Given the pluses and minuses, CAD users with a somewhat higher but not unlimited budget can entertain SSDs in one of two ways. A combination of HDDs and SSDs in multiple drive bays is very practical: in particular, a smaller SSD with your OS installed, paired with a large conventional disk drive for data. Or choose a hybrid drive that combines the best of both worlds. This emerging technology is effectively a two-tiered storage device that implements its bulk storage on a cost-effective hard disk while keeping a much smaller, much lower-latency cache in flash. For frequently accessed, reasonably sized chunks of data, you get the speed benefit of SSD without breaking the bank. Whereas an SSD currently commands ten times the price (or more) per gigabyte of a conventional 7,200-RPM HDD, the hybrid drive is a relative bargain at approximately twice the price (although the premium and the performance boost will vary by model).
The bottom line on selecting storage: Buy a lot more than you think you need, especially if you’ve chosen a system that limits you to one or two drive bays.
I recently read an article by an Intel product manager on the need for “ECC” (error correction code) memory in CAD workstations. From the article: “Corrupted data can impact every aspect of your business, and worse yet you may not even realize your data has become corrupted. Error-correcting code (ECC) memory detects and corrects the more common kinds of internal data corruption.”
For some reason this triggered my memory of the Toyota Prius sudden-acceleration incident from 2010. The popular press latched on to the idea that cosmic rays were screwing with the electronics in the Prius. While theoretically possible, the probability of this was astronomically low. It did, however, make for a great story, and the FUD (fear, uncertainty, and doubt) caused Prius prices to temporarily plummet and sales to slow to a crawl.
Back to ECC memory and CAD systems. Is there really a need for ECC memory in CAD or is it just FUD marketing to upsell hardware and make products sound more valuable than they really are? I decided to do a little research.
Who needs ECC memory and what is its role in professional & CAD workstation computing?
Naturally occurring cosmic rays can and do cause problems for computers down here on planet Earth. Certain types of subatomic particles (primarily neutrons) can pierce through buildings and computer components and physically alter the electrical state of electronic components. When one of these particles interacts with a block of system memory, GPU memory, or other binary electronics inside your computer, it can cause a single bit to spontaneously flip to the opposite state. This can lead to an instantaneous error, the potential for incorrect application output, and sometimes even a total system crash. However, a single-bit error caused by a cosmic ray strike on your PC or workstation's memory is theoretically fairly rare: only about once every 9 years per 8 GB of RAM, according to recent data.
ECC technology, used both as system RAM and in devices such as high-end GPUs, can reliably detect and correct these errors, reducing the odds of memory corruption due to single-bit errors to about once every 45 years per 8 GB of RAM. Of course, just like everything else in life, there are always tradeoffs: ECC memory is typically up to 10% slower and significantly more expensive than standard non-ECC memory.
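The detect-and-correct trick rests on error-correcting codes. Real ECC DIMMs use a SECDED code over 64-bit words, but the classic Hamming(7,4) code shows the principle in miniature: three parity bits protect four data bits, and recomputing the parity checks yields a "syndrome" that points directly at any single flipped bit. A minimal sketch:

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits. Any single flipped
# bit in the 7-bit codeword can be located and corrected.
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    # Codeword layout, positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    c = c[:]
    # Recompute each parity check; the binary syndrome is the 1-based
    # position of the flipped bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits d1..d4
```

Flip any one of the seven bits and `correct` recovers the original four data bits; memory controllers do the equivalent on every read, transparently to the application.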
Because the odds of a cosmic ray strike increase in direct proportion to the physical amount of memory (and related components) inside a computer, this is a real concern for large-scale, clustered supercomputing and other environments where computing tasks often include high-precision calculation sets that can take days or even weeks to complete. In supercomputer clusters, which often contain hundreds or even thousands of connected nodes and terabytes of memory, cosmic ray strikes on the system are much more likely and much more costly. Restarting a week-long calculation on a supercomputer can cost a facility many tens of thousands of dollars in lost time, electricity, and manpower, not to mention lost productivity.
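The scaling argument is simple arithmetic. Taking the article's rough figure of one flip per ~9 years per 8 GB at face value (it is a ballpark estimate, not a measurement), the expected interval between flips shrinks in inverse proportion to installed memory:

```python
# Ballpark figure quoted above: one cosmic-ray bit flip per ~9 years
# per 8 GB of non-ECC RAM. All numbers here are illustrative.
YEARS_PER_FLIP_8GB = 9.0

def mean_years_between_flips(ram_gb):
    """Expected interval between flips, inversely proportional to capacity."""
    return YEARS_PER_FLIP_8GB * 8.0 / ram_gb

# A hefty 64 GB workstation: roughly one flip per unit of ~1.1 years.
print(mean_years_between_flips(64))            # → 1.125

# A 1,000-node cluster with 256 GB per node (256 TB total): the expected
# interval collapses from years to a couple of hours.
hours = mean_years_between_flips(1000 * 256) * 365 * 24
print(round(hours, 2))                         # → 2.46
```

The same linear scaling is why ECC is table stakes in the data center but a judgment call in a single-user workstation.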
But even for very beefy PC CAD workstation configurations with loads of RAM on board, you are probably not at imminent risk from problems caused by cosmic ray strikes and the resulting single-bit errors. Over the course of your work, you are much more likely to endure system crashes or application hangs due to failing components, power fluctuations, and software bugs than due to cosmic ray strikes. Additionally, many applications in the desktop design and engineering space can actually endure a single-bit error without negatively impacting the computing process or product. For example, if the color or brightness of a single pixel on a display monitor changes due to this type of memory corruption on the system's GPU, nobody will ever see or notice it. There are many such examples of this type of error not really impacting one's everyday work.
That said, many leading technology manufacturers are equipping their high-end products with ECC memory for compute-heavy (especially clustered supercomputing) applications, where the benefits of error-correcting memory outweigh the comparative speed and cost drawbacks. AMD, for example, has engineered its new FirePro W9000 and FirePro S9000 ultra-high-end GPU cards to include ECC memory that can be selectively enabled by the end user for advanced computing purposes where rock-solid stability and protection from cosmic rays are crucial.
Author: Tony DeYoung
The longtime, tried-and-true hard drive remains the backbone of a workstation's storage subsystem, but a new breed of solid-state technology is pushing its limits. Although they share the same basic technology as their ancestors, today's drives are much bigger, faster, and cheaper. Traditional workstation hard-disk drives (HDDs) primarily come in a 3.5″ form factor, supporting the SATA or SAS (Serial Attached SCSI) standards.
Essentially the same models that ship in corporate and consumer branded PCs, SATA drives are less expensive, sometimes dramatically so. (A terabyte for $50, anyone?) Pricing increases with drive capacity and RPM, an indication of how quickly the mechanical platter can spin within the drive and therefore how fast the drive can read and write data. The least-expensive SATA drives support 7,200-RPM speeds, while the highest-performance options jump to 10,000 RPM.
The second HDD option, the SAS drive, requires a motherboard interface that is also compatible with SATA drives (whereas a SATA interface will not support a SAS drive). With SAS, you'll get the option to move up to 15,000 RPM, but you'll sacrifice capacity and cash.
The Choice Between Speed and Capacity
Whether you choose a SATA or SAS drive, you will generally face a trade-off between paying for more RPMs or paying for more capacity, because buying both can be costly. Most CAD professionals would opt for capacity and cost-effectiveness, because running out of space or money is usually a more glaring roadblock than modest shortages of access speed and disk bandwidth. Many of us are paranoid about running out of disk space, and we all should be to some degree, because data piles up faster than we think it will. If this describes you, consider a system with extra drive bays that leave room to add drive capacity later, although you can always fall back on external drives to shore up capacity down the road.
Reality capture is a boom business for the building industry. With roughly 5 million existing commercial buildings in the United States alone, it’s easy to understand why. Laser-scanner-based reality capture is the dominant methodology used today to accurately capture the 3D state of an existing building. However, the typical laser-scan-based point cloud is in the hundreds of millions of 3D points, sometimes even going into the billions of points. With this additional data overhead on top of an already dense Building Information Model, it’s important to optimize your workstation hardware to deliver a productive user experience.
Finding the Bottleneck
Under the hood, Autodesk Revit uses the PCG point cloud engine to rapidly access the 3D points contained in a point cloud and retrieve the points to be displayed in the current Revit View. Because the typical point cloud dataset is so large, a workstation's RAM is insufficient to hold it for the PCG engine's access. Instead, the engine reads from the disk drive, while a small amount of system RAM and video RAM is used for the current Revit View. Thus, the hard drive, rather than system RAM, CPU, or GPU, is commonly the limiting factor for point cloud performance.
Learn the Options
With data access a common limiting factor in the performance of the Revit point cloud experience, let's discuss the options available to deliver the best experience. Two primary drive types are found today: spinning-platter and solid-state drives.
- Spinning platter drives are the traditional hard drive technology, and are found in most computers today, as they deliver the best balance of storage capacity, read/write access speed, and cost.
- Solid-state drives (SSDs) are relatively new technology, contain no moving parts, and are generally much faster at reading and writing data than typical spinning platter drives.
In a structured comparison completed by the Revit product team, we found the following results when comparing typical versions of these disk drive types:
Reap the Benefits
Based upon this investigation, we would highly recommend that those looking to optimize their Revit workstations for point cloud use install an SSD for at least the local storage of the point cloud data. While you will also achieve additional benefits from running the entire OS on your SSD, a significant performance boost can be achieved through the retrofit of a ~$200 SSD to an existing workstation.
Author: Kyle Bernhardt, Product Line Manager, Autodesk Building Design Suite