Previously, we discussed the meaning behind the various index scores reported by the Cadalyst Systems Benchmark, then covered the basics of operating it. In this, part 3 of our blog, we finish discussing the operation of the Cadalyst Systems Benchmark.
The final choice of the benchmark’s initial dialog box enables the compare option, which lets you save and compare times for different test runs. This is a powerful tool that we added to C2008 v5.1 to help us develop new tests for the benchmark itself. You can use it to easily see the effects (if any) of alternate configurations of your workstation. This option is disabled by default.
Compare Options Menu
You have six choices here. The first three concern the operation of the compare function: Save Current Test Times for Later Comparison, then EXIT; Compare Current Test Times to Previous Test Times; and Save Current Test Times and Compare to Previous Test Times.
The compare function lets you compare the times from two different test runs, creating a new set of relative index numbers. It calculates the index numbers based on the ratio of the test times from the first selected file compared to the test times of the second selected file. If nothing has changed and the two different test times are virtually identical, the new calculated index is approximately 1.00. Where something has changed, the new index number clearly shows the relative improvement. For example, an index number of 1.50 indicates that performance has improved by 50%. This is a handy method for directly quantifying the benefits of, say, using a RAID 0 configuration for your hard drives.
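The arithmetic behind the compare function can be sketched in a few lines. This is a hypothetical illustration, not the benchmark's actual code: the test names and times below are made up, but the ratio is the one described above (previous time divided by current time, so 1.00 means no change and 1.50 means 50% faster).

```python
# Hypothetical sketch of how the compare function derives its relative
# index numbers. Times (in seconds) are illustrative, not real output.

previous_times = {"disk_read": 40.0, "disk_write": 60.0}  # e.g., single drive
current_times = {"disk_read": 20.0, "disk_write": 40.0}   # e.g., RAID 0 pair

def compare_index(previous: float, current: float) -> float:
    """Relative index: 1.00 means unchanged, 1.50 means 50% faster."""
    return previous / current

for test, prev in previous_times.items():
    print(f"{test}: {compare_index(prev, current_times[test]):.2f}")
```

Running this prints an index of 2.00 for the read test and 1.50 for the write test, matching the "50% improvement" reading described above.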
The fourth choice, Just Exit C2012, simply aborts the compare option. The last two choices deal with help dialog boxes: Enable Informational Alert Boxes and Disable Informational Alert Boxes—the default setting.
Informational Alert Boxes
If enabled, informational alert dialog boxes pop up (using AutoLISP’s alert function) to provide contextual help when saving and comparing different test results. These dialogs, one for each of the three compare options, guide you through the process of using the compare function until you are ready to disable them.
One More Option
There is a hidden option (disabled by default) that appends the actual times for each individual test, in seconds, to the end of the C2012_data.dta file. To enable this option, you must edit the C2012_indx.lsp file. You can edit this text file using Windows’ WordPad utility. To enable the option, just remove the two semicolons at the beginning of the line located near the end of the file, which reads: ;;(load "times").
(Note: The AutoLISP interpreter ignores any code on a line after a semicolon.)
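Concretely, the edit near the end of C2012_indx.lsp looks like this (the surrounding comment lines here are added for illustration; the file itself contains only the `(load "times")` line):

```lisp
;; As shipped -- the two leading semicolons comment the line out:
;;(load "times")

;; Edited to enable per-test times -- semicolons removed:
(load "times")
```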
This wraps up our blog on the Cadalyst Systems Benchmark. We hope it helps you to evaluate and compare the performance of different workstations running AutoCAD.
Author: Art Liddle
Previously, we discussed the meaning behind the various index scores reported by the Cadalyst Systems Benchmark. Here, we will talk about operating the benchmark. For our discussion, we will use the latest version of the Cadalyst Systems Benchmark, C2012 v5.4, which was recently released.
The Readme_C2012.txt file (included in the Zip file available for download) gives instructions for installing the Cadalyst Systems Benchmark. They are straightforward, so we will not repeat them here.
The initial dialog box for the C2012 benchmark offers several radio-button options for customizing the test, as well as edit boxes for recording selected information about your workstation configuration. We will discuss each item in order, working our way down from the top of the dialog box.
Record Current System Configuration
This collection of six edit boxes prompts you for key information about your system’s configuration. We strongly suggest that you take full advantage of this section; record as much data as you can at the time of the test—up to a maximum of 132 characters per edit box. This information will make your life easier when you are poring over the test data later. The benchmark stores your responses in the C2012_data.dta file, using them as defaults the next time you run the test. (If it hasn’t changed, you don’t need to retype information for each test.) The last edit box (Remarks) lets you record general notes to yourself. The benchmark automatically determines and records the following information: AutoCAD version, graphics window size in pixels, and the current date and time.
Select Test Type
This is where you choose which type of test to run: 3D/2D/Other Functions (the only option that includes the Total Index score), 2D/Other Functions, or 3D Functions Only. Depending upon the type of work you do with AutoCAD, you can save yourself a lot of time by choosing either of the latter two options. In addition, if you do not have at least a midrange 3D graphics card, skip the 3D test—it can take several hours to run with a low-end card.
Number of Test Loops
For our reviews, we typically choose to run three loops of the test; the benchmark automatically calculates and reports average scores and times. For the record, we have found that the scores from a single run of the test closely match the average scores from three iterations. New to the C2012 version: We added a Battery Rundown Test option, which runs 99 loops. As the name indicates, we use this option for measuring the battery performance of mobile workstations.
Next time, in part 3 of our series, we will discuss the Compare Option. Originally added to the Cadalyst Systems Benchmark strictly for our internal use, it proved to be so handy, we enabled it for everyone.
Author: Art Liddle
Merriam-Webster’s online dictionary lists one definition for benchmark as: “a standardized problem or test that serves as a basis for evaluation or comparison.”
The Cadalyst Systems Benchmark is designed to help you evaluate and compare the performance of workstations running AutoCAD. Comparing the performance scores between workstations (or different configurations) will help you make intelligent choices when purchasing a new computer or upgrading an existing one.
The Cadalyst Systems Benchmark reports a total index score and four component index scores keyed to specific performance areas, as well as individual numbers for each subroutine of the test. Note: The index numbers are simply the ratio of the base time for an operation to the current test time for that operation. Larger index numbers indicate better performance.
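The index formula described above can be sketched as follows. Note that the base time used here is a made-up stand-in, not the benchmark's actual reference value; the point is only the shape of the calculation (a fixed base time divided by the current test time, so faster runs score higher).

```python
# Minimal sketch of the index formula: base time / current test time.
# BASE_TIME is a hypothetical reference value, not the benchmark's own.

BASE_TIME = 120.0  # seconds for the reference run (illustrative)

def index_score(current_time: float) -> float:
    """Larger scores mean better performance."""
    return BASE_TIME / current_time

# A workstation finishing in 60 s scores 2.00 -- twice the reference speed.
print(f"{index_score(60.0):.2f}")
```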
Total Index: This is an average of the four component indexes; it gives a quick look at the overall performance of a workstation. The Total Index score is the number we focus on for our reviews, but you can dig a little deeper for additional performance information relevant to your specific requirements.
3D Graphics Index: This is closely tied to a workstation’s graphics card. Depending on the rendering mode you typically use when working with 3D models in AutoCAD, you may want to focus on just one of the 3D graphics subcategories: Wireframe, Hidden, Conceptual, and Realistic. If you don’t work with 3D models, you can safely ignore this index altogether.
2D Graphics Index: This measures more than just 2D graphics performance: It effectively measures all onscreen performance that does not involve rendering a 3D model. This component of the test creates, copies, blocks, moves, arrays, changes layers, changes colors, explodes, and erases three different types of objects: orthogonal lines, radial polylines, and text. Zoom and Pan commands are sprinkled throughout each test. Don’t pay any attention to the individual subcategories for this one; what matters is the total score.
Disk Index: This measures a workstation’s performance for reading and writing files to the hard drive. Most of the drives on the market today provide similar performance. What will make a big difference is having a pair of drives in a RAID 0 configuration. For RAID 0, file operations are simultaneously split between the two drives, nearly doubling performance.
CPU Index: This gauges the performance of the central processing unit at the heart of each workstation. It has proven to be an accurate measure of relative performance, especially with the turbo-mode of the new generation CPUs.
Author: Art Liddle
Longitude Media, publisher of Cadalyst, developed this blog devoted to helping CAD users, CAD managers, and IT personnel optimize hardware for 3D CAD applications. Blog posts are developed by contributors who are experts in the area of professional hardware, 3D CAD software, or other related topics, including current bloggers, consultants, software resellers, freelance writers, and even CAD users/managers/IT personnel themselves.
Patrick Hughes is author of the CADspeed series, “A CAD Dinosaur’s Journey into Modern Times.” A machine designer and owner of Engineered Design Solutions in Rockford, Illinois, Hughes has worked with AutoCAD since 1991. He has developed a number of AutoLISP and other software solutions to automate his workflow and increase productivity, including the commercially available time tracking program, CadTempo.
Tony DeYoung is community manager for FireUser.com, a web site focused on workstation graphics for professional CAD and visualization. Previous to this he was responsible for developing the community and vendor support for OpenGL on OpenGL.org, OpenCL on Khronos.org, and Acrobat 3D from Adobe.
Curt Moreno is the owner and editor of Kung Fu Drafter, a blog that is CAD centric and geek peripheral, and a CAD manager at a Houston, Texas-based engineering firm. He has recently begun to contribute to Cadalyst and AUGI and will speak at Autodesk University 2011. Curt currently lives in Houston where he enjoys spending time with his dog and horses. Reach him at www.kungfudrafter.com or follow him on Twitter at @WKFD.
Art Liddle is a former editor-in-chief of Cadalyst. He currently lives in Eugene, Oregon, and teaches physics at Springfield High School. He created and regularly updates the Cadalyst Systems Benchmark.
Brian Benton is a long-time contributor to Cadalyst and a services provider for CAD, GIS, 3D modeling, rendering and animation. Read his blog at CAD-a-Blog.com.
Alex Herrera is a consultant focusing on high-performance graphics and workstations, and he has more than 25 years of engineering, marketing and management experience in the semiconductor industry. Author of frequent articles covering both the business and technology of graphics, he is also responsible for the Workstation Report series, published by Jon Peddie Research.
Shaun Bryant is a training consultant. Originally from a civil/structural engineering background, Shaun has more than 24 years of hands-on experience with AutoCAD as a user and a CAD manager and has worked in the Autodesk channel in sales, support, and training. He has a varied background, comprising eleven years as a CAD & FM consultant/trainer, three years in sales, pre-sales, and business development, and nine years of industry experience as a CAD manager/user. Shaun also provides consultancy, bespoke training courses, manuals, and video learning for clients of his own business, CADFMconsultants. He is currently a Director on the Board of Autodesk User Group International (AUGI). He is also Vice-Chairman of the Autodesk UK/Ireland Authorised Training Centre Advisory Board (ATCAB) and a member of Autodesk’s Secondary Education Advisory Board. Shaun is author of the reputable CAD blog, Not Just CAD!, and co-owner of www.CADucation.com.
Ron LaFon was a contributing editor for Cadalyst and the magazine’s principal hardware reviewer for 17 years. He passed away in May 2011.
Mark Shaw and James Ecklund are co-owners of Stored Technology Solutions, Inc., an IT company that provides and supports hardware and software solutions designed specifically for business — helping make Information Technology solutions work for your company, instead of working around your IT infrastructure.