AnandTech Article Channel
- Sigma Designs Updates Z-Wave SoC Portfolio for Affordable Home Automation
- Hardware Tricks: Can You Fix a Failing Mobile GPU with a Hair Dryer?
- Inside AnandTech 2013: Power Consumption
- NVIDIA and Continuum Analytics Announce NumbaPro, A Python CUDA Compiler
- ReadyNAS 100, 300 and 500 Series Reboots Netgear's SMB NAS Lineup
Sigma Designs Updates Z-Wave SoC Portfolio for Affordable Home Automation Posted: 19 Mar 2013 01:00 AM PDT

The rise of connected devices has brought about an increased interest in home automation amongst consumers. Readers looking for a brief background on the various home automation (HA) technologies can peruse our primer piece from last year. HA technologies have been around since the 1970s, but the costs (mainly due to the technology's complexity and the need for custom installers) have kept them out of the reach of the average consumer. However, the use of Wi-Fi in HA devices has suddenly made the technology more accessible.

Sigma Designs is known for its video decoder chipsets, but it has been trying to transform into a one-stop shop for 'powering the new digital home' through some strategic acquisitions. One of these was the 2008 purchase of Zensys, the Danish company responsible for creating the Z-Wave home control technology. Sigma Designs is announcing the fifth generation of Z-Wave SoCs today. The cost of the SoCs has gone down compared to the previous generation, and Sigma claims improved RF performance and lower power consumption relative to previous-generation products. Platform developers also get more memory in the SoCs to work with.

With this generation, the company is taking extra steps to ensure better returns for its customers by providing customizable reference designs and enabling faster time-to-market for end products. A feature-heavy middleware stack is also being supplied. Z-Ware and Z-Ware Apps form the APIs and customizable UI designs for multiple platforms. ZIPR is the reference design that handles translation between an IP network and the Z-Wave mesh network. Z/IP Gateway is the gateway reference design (transforming IP commands to Z-Wave commands and vice versa), while UZB is a reference design to enable Z-Wave functionality over a USB port.

The SoCs being introduced today are the SD3503, a Z-Wave serial interface for use in HA gateways, and the SD3502, a general-purpose SoC for use in HA devices. The ZM5101 and ZM5202 are modules that pair these SoCs with pre-FCC / CE approved RF designs for faster time to market. An essential aspect of today's introductions is that they are backward compatible, so existing Z-Wave controllers should be able to interface with HA products using the new Z-Wave SoCs. The combination of Z-Wave, ZigBee and Wi-Fi will rule the HA space for the next few years, and Sigma's new platforms ensure that Z-Wave will continue to stay relevant.
Hardware Tricks: Can You Fix a Failing Mobile GPU with a Hair Dryer? Posted: 18 Mar 2013 10:45 PM PDT

Over the years, I’ve encountered my fair share of hardware failures while writing for AnandTech. For example, nearly every SFF I reviewed back in my early days failed within a couple of years (usually a dead motherboard), and both of the first AM2 motherboards I reviewed died within six months. I’ve seen more than a few bad sticks of memory, particularly overclocking RAM that couldn’t handle long-term use at higher voltages. And let’s not even talk about hard drives—lately I’ve noticed an uptick in the number of people coming to me with laptops that have a dead hard drive; so far I’ve only managed to successfully recover data from one drive using the famous (infamous?) “put your hard drive in the freezer” trick.

Needless to say, when a friend came to me with an old Gateway P-6831 FX from early 2008—a laptop I awarded a Gold Editors’ Choice award to, no less!—that was giving him a “Code 43” error on the GeForce 8800M GTS graphics, I didn’t have much hope of fixing the problem. Still, five years out of a $1300 gaming notebook isn’t too bad, and when I saw some suggestions online that I might be able to fix the GPU by putting it under the heat of a hair dryer for a couple of minutes, I figured, “What do we have to lose?” Well, what we had to lose was about four hours of my time, as this particular notebook is something of a pain to disassemble down to the GPU. But in the interest of testing out the “hair dryer” trick, I thought it worth a shot. Here’s the video footage of the process.
Much to my surprise, all of the effort proved worthwhile, at least in the short term. Most fixes of this nature will only prolong the lifetime of failing hardware, but if you can get another several months—or dare we hope for a year?—out of a laptop with such a simple solution, that’s pretty good.

I did take a moment to at least do a quick check of graphics performance. Five years ago, the 8800M GTS was one of the fastest mobile GPUs on the block—surpassed only by the more expensive 8800M GT and 8800M GTX. 64 DX10 CUDA cores running at 500MHz might not seem like much, but the 256-bit memory interface (clocked at 1600MHz) is nothing to scoff at. And what sort of performance does the 8800M GTS deliver? Even when paired with a now-decrepit Core 2 Duo T5450 (1.66GHz), the notebook still managed a reasonable score of just under 7000 in 3DMark06. To put that in perspective, however, Intel’s HD 4000 with a standard voltage mobile CPU now manages around 7500. Of course, 3DMark06 optimizations are pretty common, but we’re basically looking at top-end mobile GPU performance from five years back now being found in Intel’s IGP. When Haswell launches in a few months with GT3 and GT3e mobile parts, we’ll likely see IGP performance start to encroach on decent midrange GPUs like the GT 640M and HD 7730M—at least, that’s what I’m hoping to get!

Anyway, if you’ve got a failing GPU or other component and you’re at the point where you’re ready to throw it in the trash, it might be worth a bit of your time to give this hair dryer trick a shot. I’ve seen others recommend baking a GPU PCB in the oven at 200F for eight minutes, and while that could work as well, it seems more likely to burn out some other component if you’re not careful. Sadly, both this trick and the freezer trick failed on another recent HDD failure; next up on my list of hardware tricks to try: transplanting a dead HDD’s platters into a working drive. Wish me luck; my dad’s data needs it!
Gallery: Gateway P-6831 FX Repairs
Inside AnandTech 2013: Power Consumption Posted: 18 Mar 2013 11:11 AM PDT

Two of the previous three posts I've made about our upgraded server infrastructure have focused on performance. In the second post I talked about the performance (and reliability) benefits of going with our all-SSD architecture, while in the third post I talked about the increase in CPU performance between our old and new infrastructures. Today, however, it's time to focus on power consumption.

Our old server infrastructure came from a time when power consumption mattered, but it hadn't yet been prioritized. This was before Nehalem's 2:1 rule (a 2% performance increase for every 1% power increase), and it was before power gating. Once again I turned to our old HP DL585 server with four AMD Opteron 880s (8 cores total) as an example of just how much things have changed.

As a recap, we moved from the DL585 (and over 20 other 1U, 2U and 4U machines with similar, or slightly newer, class processors) to an array of 6 Intel SR2625s (dual-socket, 6-core Westmere based platforms), with another 6 to be deployed this year. All of our previous servers used hard drives, while all of our new servers use SSDs. The combination resulted in more than a doubling of peak CPU performance, and an increase in IO performance of anywhere from a near tripling to over an order of magnitude. Everything got better, but the impressive part is that power consumption went down dramatically:
With both machines plugged into a power outlet but completely off, the new server already draws considerably less power. The difference at idle, however, is far more impressive. Without power gating and without a clear focus on minimizing power consumption, our old DL585 pulled over 500W when completely idle. It shocked me at first, but remembering how things used to be back then, it stopped being so surprising. There was a time when even our single-socket CPU testbeds would pull over 200W at idle.

Under heavy integer (7-zip) and FP (Cinebench) workloads, the difference is still staggering. You could run 2.5 of the new servers in the same power envelope as a single one of the old machines.

The power consumption under heavy IO needs a bit of explaining. We were still on an all 3.5-inch HDD architecture back then, so we had to rely on a combination of internal drives as well as an external Promise VTrak J310s chassis to give us enough spindles to deliver the performance we needed. The 693.1W I report above includes the power consumption of the VTrak chassis (roughly 150W). In reality, all of the other tests here (idle, 7-zip, Cinebench) should include the VTrak's power consumption as well, since the combination of the two was necessary to service the needs of the Forums alone. With the new infrastructure everything can be handled by this one tiny 2U box. So whereas under a heavy IO load our old setup would pull nearly 700W, the new server only needs 170W.

Datacenter power pricing varies depending on the size of the customer and the location of the datacenter, but if you were to assume roughly $0.10 per kWh, you'd be talking about $459 per year (assuming a 100% idle workload) for our old server compared to $92.50 per year for the new one. That's a considerable savings per year, just for a single box - and that assumes the best case scenario (and doesn't include the J310s external chassis). For workloads that don't necessarily demand huge increases in performance, modernizing your infrastructure can come with significant power and space savings (not to mention a positive impact on reliability).

Keep in mind that we're only looking at a single machine here. While the DL585 was probably the worst example from our old setup, there were over a dozen other offenders in our racks (e.g. dual-socket Pentium 4 based Xeons). It's no wonder that power consumption in datacenters became a big issue very quickly. Our old infrastructure at our old datacenter was actually at the point where we were power limited. Although we only used a rack and a half of space, we had to borrow power from adjacent racks because our requirements were so high. The new setup not only gives us better performance, it gives us headroom on the power consumption side as well. As I mentioned in my first post, we went down this path back in 2010 - there have been further power (and performance) enhancements since then. A move to 22nm based silicon could definitely help further improve things.

For some workloads, this is where the impact of microservers can really be felt. While I don't see us moving to a microserver environment for our big database servers, it's entirely possible that the smaller, front-end application servers could see a power benefit. The right microprocessor architectures aren't available yet, but as Intel moves to its new 22nm Atom silicon and as ARM moves to 20nm Cortex A57/A53, things could be different.
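For anyone who wants to plug in their own power draw and electricity rate, here is a minimal sketch of the arithmetic behind those dollar figures. The idle wattages used in the example are assumptions back-calculated from the quoted annual costs (the original power chart isn't reproduced in this digest), not measurements.

```python
# Back-of-the-envelope check of the annual power cost figures quoted above.
# The idle wattages are assumptions inferred from the article's dollar figures.

HOURS_PER_YEAR = 24 * 365      # 8760 hours
PRICE_PER_KWH = 0.10           # USD per kWh, the rate assumed in the article

def annual_cost(watts, price_per_kwh=PRICE_PER_KWH):
    """Annual electricity cost for a constant draw of `watts`."""
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * price_per_kwh

old_idle_watts = 525   # assumed: old DL585 idling at a bit over 500W
new_idle_watts = 105   # assumed: new SR2625 idle draw

print(f"Old server: ${annual_cost(old_idle_watts):.2f}/year")   # roughly $460
print(f"New server: ${annual_cost(new_idle_watts):.2f}/year")   # roughly $92
```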
NVIDIA and Continuum Analytics Announce NumbaPro, A Python CUDA Compiler Posted: 18 Mar 2013 06:00 AM PDT

As NVIDIA’s GPU Technology Conference 2013 kicks off this week, there will be a number of announcements coming down the pipeline from NVIDIA and their partners. The biggest and most important of these will come Tuesday morning with NVIDIA CEO Jen-Hsun Huang’s keynote speech, while some other product announcements such as this one are being released today with the start of the show.

Starting things off is news from NVIDIA and Continuum Analytics, who are announcing that they are bringing Python support to CUDA. Specifically, Continuum Analytics will be introducing a new Python CUDA compiler, NumbaPro, for their high performance Python suite, Anaconda Accelerate. With the release of NumbaPro, Python will be joining C, C++, and Fortran (via PGI) as the 4th major CUDA language.

For NVIDIA the addition of Python is of course a big deal, opening the door to another substantial subset of programmers. Python is used in several different areas; though perhaps most widely known as an easy to learn, dynamically typed language common in scripting and prototyping, it’s also used professionally in fields such as engineering and “big data” analytics, the latter of which is where Continuum’s specific market comes into play. For NVIDIA this brings with it both the benefit of making CUDA more accessible due to Python’s reputation for simplicity, and at the same time the opportunity to open the door to new HPC industries.

Of course this is very much a numbers game for NVIDIA. Python has been one of the more widely used programming languages for a number of years now – though by quite how much depends on who’s running the survey – so after getting C++ under their belts it’s a logical language for NVIDIA to focus on to quickly grow their developer base. At the same time Python has a much larger industry presence than something like Fortran, so it’s also an opportunity for NVIDIA to further grow beyond academia and into industry.

Meanwhile, though NumbaPro can’t claim to be the first such Python CUDA compiler – other projects such as PyCUDA came first – Continuum’s Python compiler is set up to become all but the de facto Python implementation for CUDA. Like The Portland Group’s Fortran compiler, NVIDIA has singled out NumbaPro for a special place in their ecosystem, effectively adopting it as a 2nd party CUDA compiler. So while Python isn’t a supported language in the base CUDA SDK, NVIDIA considers it a principal CUDA language through the use of NumbaPro.

Finally, NVIDIA is also using NumbaPro to tout the success of their 2011 CUDA LLVM initiative. One of the goals of bringing CUDA support to LLVM was to make it easier to add support for new programming languages to CUDA, and in this case that is exactly what Continuum has used to build their Python CUDA compiler. NVIDIA’s long term goal remains to bring more languages (and thereby more developers) to CUDA, and being able to discuss success stories involving their LLVM compiler is a big part of accomplishing that.
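To give a flavor of what GPU-compiled Python looks like, here is a minimal sketch written against the open-source Numba vectorize decorator; NumbaPro's exact module names and target string may differ, so treat this as an illustration of the approach rather than NumbaPro's documented API.

```python
# Illustrative sketch of GPU-compiled Python in the Numba/NumbaPro style.
# The decorator spelling follows the open-source Numba package; NumbaPro's
# import path and 'target' value may differ. Requires a CUDA-capable GPU.
import numpy as np
from numba import vectorize

@vectorize(['float32(float32, float32)'], target='cuda')
def scaled_add(a, b):
    # Compiled to a CUDA ufunc: each output element is computed on the GPU.
    return 2.0 * a + b

x = np.arange(1_000_000, dtype=np.float32)
y = np.ones_like(x)
result = scaled_add(x, y)   # arrays are copied to the GPU, the kernel runs, results come back
print(result[:5])
```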
ReadyNAS 100, 300 and 500 Series Reboots Netgear's SMB NAS Lineup Posted: 18 Mar 2013 05:30 AM PDT

Netgear got into the SMB / SOHO / consumer NAS market with the purchase of Infrant Technologies in May 2007. The first generation ReadyNAS NV products were based on Infrant's own chips (using a SPARC core). A few years later, Netgear also started producing models based on Intel platforms. These included the Pro, Ultra and Ultra Plus models. Netgear soon realized that the market served by Infrant's old chips (low to mid-range SMB / SOHO / consumer) was being taken over by models based on Marvell's ARM platforms. To address this, the ReadyNAS Duo v2 and ReadyNAS NV+ v2 were introduced in late 2011. The result was that Netgear had three variants of its RAIDiator OS with different features (one for the SPARC-based Infrant chips, one for x86 and one for ARM). The naming convention for the models was also not consumer friendly. Today, Netgear is taking steps to correct these issues with the launch of new models as well as a completely new operating system, ReadyNAS OS 6.

Hardware Refresh: The Marvell-based Duo v2 and NV+ v2 are being replaced with the next-generation ARMADA 370-based ReadyNAS 102 and 104 respectively. The amount of DRAM is also doubled from 256 to 512 MB. Models in the 300 series are based on the Intel Atom D2701 platform, while the 500 series is based on the Intel Core i3-3220. The final digit in each model number refers to the number of bays available. A comparison of the different models is provided in the table below. Some of the interesting hardware features include the addition of an IR receiver in some of the models, as well as a touchscreen in the 500 series. Netgear is also introducing Expansion Disk Array (EDA) units to provide scalability using the eSATA port on the main device.

ReadyNAS OS 6.0: In order to maintain a consistent feature set across all models, Netgear has decided to start with a clean slate. Therefore, ReadyNAS OS 6.0 is not going to be made available for any of the earlier models (including the Duo v2 / NV+ v2). The file system has been updated from ext3 / ext4 to BTRFS. The use of BTRFS allows Netgear to provide advanced snapshotting capabilities usually present only in enterprise NAS units. We have already seen the capabilities of ReadyNAS Replicate, a $45 add-on for scheduling secure backups across different NAS units in physically different locations. With ReadyNAS OS 6.0, this feature is included for free. The operating system also includes an antivirus engine which provides real-time protection, not just scheduled scans. Full iSCSI support for virtualized environments is available (with VMware and Microsoft certifications). A firmware update in late May is scheduled to bring encryption support to the OS. AES-256 will be used, and the key will be stored on (and required to be present on) a USB dongle connected to the NAS. An unfortunate aspect seems to be that none of the models in the 100 / 300 / 500 series have hardware-accelerated encryption support.

Amongst consumer-targeted features, ReadyNAS OS 6.0 supports 'cloud-based' discovery, where the user can simply enter the serial number of the NAS unit being set up online, and Netgear's backend handles the firmware initialization and first-time actions. This has typically been handled by the RAIDar utility (which will also continue to be supported). The ReadyCLOUD feature can be used not only for discovery, but also for management and access.
Support is also in place for local and remote backup / restore with Time Machine. DLNA is a standard feature in all NAS units now; Netgear claims it needs only a single DLNA server to service both local and remote devices. The ReadyDROP feature provides Dropbox-like real-time file synchronization between mobile devices / PCs and a ReadyNAS device. Netgear's Genie Marketplace is also available on the new devices for access to free as well as paid apps which extend the functionality of the device.

Pricing and Availability: The ReadyNAS 100 and 300 series are available for purchase today, while the 500 series will make its appearance in the market next month. Both diskless and populated models are available. MSRPs for diskless configurations are provided below:
Netgear is also introducing the 4-bay rackmount ReadyNAS 2120 for $1229. More details regarding the internal hardware platform of this model will be made available later.