
Wednesday, March 20, 2013

AnandTech Article Channel


Logitech's G Series Branding and Product Refresh

Posted: 20 Mar 2013 01:00 AM PDT

Logitech has been producing peripherals for some time now, but what they've lacked is a concrete "this is for enthusiasts" brand identity. Ordinarily a vendor producing a specific "gaming" brand is met with eyerolls, and rightfully so, but Logitech's gaming peripherals have floated in their expansive product lineup with only the "G" prefix to distinguish them. What we're looking at today is a push towards a very concrete, very distinct branding that will make Logitech's gamer-oriented products much more readily identifiable.

As far as the refresh itself goes, we'll start with the quartet of mice being released. These are essentially refreshes of many of their existing mice with new skins and more importantly, newer and better hardware under the hood. The new versions all see an "s" suffix added to the model numbers, but with them we get better switches and sensors across the board.


Clockwise from the top left: G100s, G400s, G700s, G500s.

Other than the upgraded internals, Logitech deliberately hewed very closely to the existing designs in terms of both materials and feel. Their attitude was "If it ain't broke, don't break it," and while my first inclination might be to chide them for being lazy, the reality is that I agree. These mice (especially the G500) were nigh perfect on their initial release, so there's little reason to mess with success. I will note that I'm not a huge fan of the new visual design, though. MSRP for these mice will be $39 for the G100s, $59 for the G400s, $69 for the G500s, and $99 for the G700s wired/wireless combo mouse.

Next on the agenda are Logitech's new keyboards, but I have a slightly harder time getting excited about these.


The Logitech G19s (top) and G510s (bottom).

These keyboards are straight-up new products. Both feature completely color-configurable full-keyboard backlighting and Logitech's GamePanel LCDs. The GamePanels have apparently been pretty popular on their existing keyboards, and Logitech isn't messing with success there. They have also improved durability, using hydrophobic coatings on the palm rest and double-coating the keys for improved longevity. That said, Logitech went with RGB color-configurable backlighting rather than mechanical key switches, so these are still membrane keyboards. If you're like me, that's a bit of a disappointment.

The G19s has a full color LCD GamePanel and an external power brick, which allows it to use a single USB 2.0 connection while offering two powered USB 2.0 ports, the backlighting, and the panel. MSRP is set at $199.

The G510s is only slightly cut down; instead of the powered USB 2.0 ports, you get integrated USB audio that toggles on when you plug headphones and a microphone into it. I'm actually pretty keen on that as opposed to using a passthrough, as it makes Windows' clunky audio switching more tolerable. MSRP is set at $119.

Finally, Logitech is releasing two new headsets, both of which I found surprisingly comfortable. Finding a good gaming headset can be difficult for people who wear glasses (or even over-ear headphones in general), but the grip of the new headsets, the G430 and G230, was remarkably gentle while still being secure. Both headsets feature a noise-cancelling microphone. The more expensive G430 (at $79) sports 7.1 surround sound and includes a removable USB audio dongle, meaning you can opt to use it as a basic pair of headphones if you're so inclined. Meanwhile, the more affordable G230 (at $59) forgoes these accoutrements, instead offering basic stereo sound.

Common to all of these products, Logitech is unifying device drivers under one piece of software (something some of their competitors still lack), and all but the G430 are Mac compatible (though there's no reason you can't remove the USB dongle and use the G430 as a basic headset on a Mac). Availability is scheduled for the beginning of April 2013 in the United States, and May 2013 in Europe.

More Details On NVIDIA’s Kayla: A Dev Platform for CUDA on ARM

Posted: 19 Mar 2013 05:00 PM PDT

In this morning’s GTC 2013 keynote, one of the items briefly mentioned by NVIDIA CEO Jen-Hsun Huang was Kayla, an NVIDIA project combining a Tegra 3 processor and an unnamed GPU on a mini-ITX-like board. While NVIDIA is still withholding some of the specifics of Kayla, we finally have some more details on just what Kayla is for.

The long and short of matters is that Kayla will be an early development platform for running CUDA on ARM. NVIDIA’s first CUDA-capable ARM SoC will not arrive until 2014 with Logan, but NVIDIA wants to get developers started early. A separate development platform gives interested developers a chance to take an early look at CUDA on ARM in preparation for Logan and other NVIDIA products using ARM CPUs, and to start developing their wares now.
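To give a concrete sense of what developers would actually be running on Kayla, here's a minimal CUDA C vector-add – deliberately generic CUDA 5.x-era code rather than anything Kayla-specific, since NVIDIA hasn't published platform details beyond what's above:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one element; a deliberately plain CUDA kernel.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The point of Kayla is that code like this – today compiled against an x86 host – would build and run with an ARM CPU driving the Kepler GPU instead.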

As it stands Kayla is a platform whose specifications are set by NVIDIA, with ARM PC providers building the final systems. The CPU is a Tegra 3 processor – picked for the PCI-Express bus needed to attach a dGPU – while the GPU is a Kepler family GPU that NVIDIA is declining to name at this time. Given the goals of the platform and NVIDIA’s refusal to name the GPU, we suspect this is a new ultra low-end 1 SMX (192 CUDA core) Kepler GPU, but this is merely speculation on our part. There will be 2GB of RAM for the Tegra 3, while the GPU will come with a further 1GB for itself.

Update: PCGamesHardware has a picture of a slide from a GTC session listing Kayla's GPU as having 2 SMXes. It's definitely not GK107, so perhaps a GK107 refresh?

The Kayla board being displayed today is one configuration, utilizing an MXM slot to attach the dGPU to the platform. Other vendors will be going with PCIe, using mini-ITX boards. The platform as a whole draws on the order of tens of watts – but of course NVIDIA is quick to point out that Logan itself will be an order of magnitude less, thanks in part to the advantages conferred by being an SoC.

NVIDIA was quick to note that Kayla is a development platform for CUDA on ARM as opposed to calling it a development platform for Logan, though at the same time it unquestionably serves as a sneak peek at Logan. This is in large part because the CPU will not match what’s in Logan – Tegra 4 is already beyond Tegra 3 with its A15 CPU cores – and it’s unlikely that the GPU is an exact match either. Hence the focus on early developers, who are going to be more interested in making it work than in the specific performance the platform provides.

It’s interesting to note that NVIDIA is not only touting Kayla’s CUDA capabilities, but also the platform’s OpenGL 4.3 capabilities. Because Kayla and Logan are Kepler based, the GPU will be well ahead of OpenGL ES 3.0 in terms of functionality. Tessellation, compute shaders, and geometry shaders are all present in OpenGL 4.3, among other things, reflecting the fact that OpenGL ES is a far more limited API than full OpenGL. This means that NVIDIA is shooting right past OpenGL ES 3.0, going from OpenGL ES 2.0 with Tegra 4 to OpenGL 4.3 with Logan/Kayla. This may also mean NVIDIA intends to use OpenGL 4.3 as a competitive advantage with Logan, attracting developers and users looking for a more feature-filled SoC than what current OpenGL ES 3.0 SoCs are slated to provide.

Wrapping things up, Kayla will be made available in the spring of this year. NVIDIA isn’t releasing any further details on the platform, but interested developers can go sign up to receive updates over at NVIDIA’s Developer Zone webpage.

On a lighter note, for anyone playing NVIDIA codename bingo, we’ve figured out why the platform is called Kayla. Jen-Hsun called Kayla “Logan’s girlfriend”, and it turns out he was being literal. So in keeping with their SoC naming this is another superhero-related name.

NVIDIA Updates GPU Roadmap; Announces Volta Family For Beyond 2014

Posted: 19 Mar 2013 04:15 PM PDT

As we covered briefly in our live blog of this morning’s keynote, NVIDIA has publicly updated their roadmap with the announcement of the GPU family that will follow 2014’s Maxwell family. That new family is Volta, named after Alessandro Volta, the physicist credited with the invention of the battery.

At this point we know very little about Volta other than a name and one of its marquee features, but that’s consistent with how NVIDIA has done things before. NVIDIA has for the last couple of years operated on an N+2 schedule for their public GPU roadmap, so with the launch of Kepler behind them we had been expecting a formal announcement of what was to follow Maxwell.

In any case, Volta’s marquee feature will be stacked DRAM, which places DRAM on the same package as the GPU, connected to it using through-silicon vias (TSVs). Having high bandwidth, on-package RAM is not new technology, but it is still relatively exotic. In the GPU world the most notable shipping product using it would be the PS Vita, which has 128MB of RAM attached in a wide-IO (but not TSV) manner. Meanwhile NVIDIA competitor Intel will be using a form of embedded DRAM for the highest-performance GT3e iGPU in their forthcoming Haswell generation CPUs.

The advantage of stacked DRAM for a GPU is that its locality brings with it both bandwidth and latency benefits. In terms of bandwidth the memory bus can be both faster and wider than an external memory bus, depending on how it’s configured: the close location of the DRAM to the GPU makes it practical to run a wide bus, while the short traces allow for higher clockspeeds. Meanwhile the proximity of the two devices means that latency should be a bit lower – much of the latency is in the RAM fetching the required cells, but at the clockspeeds GDDR5 already operates at, the relatively long memory bus traces on a GPU add their own delay, so there are some savings to be gained.

NVIDIA is targeting a 1TB/sec bandwidth rate for Volta, which to put things in perspective is over 3x what GeForce GTX Titan currently achieves with its 384-bit, 6Gbps/pin memory bus (288GB/sec). This would imply that Volta is shooting for something along the lines of a 1024-bit bus operating at 8Gbps/pin, or possibly an even larger 2048-bit bus operating at 4Gbps/pin. Volta is still years off, but this at least gives us an idea of what NVIDIA needs to achieve to hit their 1TB/sec target.
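The underlying arithmetic is simple – bandwidth is the bus width in bytes multiplied by the per-pin data rate – and a quick sanity check bears the numbers out. Note the Volta configurations below are our speculation from the paragraph above, not NVIDIA-confirmed specs:

#include <cstdio>

// Bandwidth (GB/sec) = (bus width in bits / 8) * per-pin rate in Gbps.
static double bandwidthGBps(int busWidthBits, double gbpsPerPin)
{
    return (busWidthBits / 8.0) * gbpsPerPin;
}

int main()
{
    printf("GTX Titan, 384-bit @ 6Gbps:  %4.0f GB/sec\n", bandwidthGBps(384, 6.0));   // 288
    printf("Volta (?), 1024-bit @ 8Gbps: %4.0f GB/sec\n", bandwidthGBps(1024, 8.0));  // 1024
    printf("Volta (?), 2048-bit @ 4Gbps: %4.0f GB/sec\n", bandwidthGBps(2048, 4.0));  // 1024
    return 0;
}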

What will be interesting to see is how NVIDIA handles the capacity issues brought on by on-chip RAM. It’s no secret that DRAM is rather big, and especially so for GDDR. Moving all of that RAM on-chip seems unlikely, especially when consumer video cards are already pushing 6GB (Titan). For high-end GPUs this may mean NVIDIA is looking at a split RAM configuration, with the on-chip RAM acting as a cache or small pool of shared memory, while a much larger pool of slower memory is attached via an external bus.

At this point Volta does not have a date attached to it, unlike Maxwell, which originally had a 2013 date attached when first named. That date of course slipped to 2014, and while it’s never been made clear why, the fact that Kepler slipped from 2011 to 2012 is a reminder that NVIDIA is still tied to TSMC’s production schedule due to their preference for launching new architectures on new nodes. Volta in turn will have some desired node attached to its development, but we don’t know which one at this time.

With TSMC shaking up its schedule in an attempt to catch up to Intel on both nodes and technology, the lack of a date ultimately is not surprising, since it’s difficult at best to predict when the appropriate node will be ready 3 years out. On that note, it’s worth pointing out that while NVIDIA has specifically mentioned FinFET transistors for their Parker SoC, they have not mentioned FinFET for Volta. The question came up at their investor meeting, and while it wasn’t specifically denied we were also left with no reason to expect Volta to be using FinFET, so make of that what you will.

Meanwhile, in NVIDIA tradition they’ve also thrown out a very rough estimate of Volta’s performance by plotting their GPUs against a chart of FP64 performance per watt. Today Kepler is already at roughly 5.5 GFLOPS/watt for K20X, while Volta is plotted at around 24. Like the rest of the GPU industry NVIDIA remains power constrained, so at equal TDPs we’d expect roughly four times the performance of K20X, which would put total FP64 performance at around 5 TFLOPS. But again, all of this is early for a GPU that will not be released for years.
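As a back-of-the-envelope check on that estimate – with the chart figures eyeballed, and K20X's published 235W board power assumed as the fixed budget:

#include <cstdio>

int main()
{
    const double tdpWatts        = 235.0;  // K20X's published board power, held constant
    const double k20xGflopsPerW  = 5.5;    // roughly where NVIDIA's chart puts K20X today
    const double voltaGflopsPerW = 24.0;   // roughly where the chart puts Volta

    // At a fixed power budget, FP64 throughput scales with perf-per-watt.
    printf("Volta estimate: %.1f TFLOPS FP64 (%.1fx K20X)\n",
           voltaGflopsPerW * tdpWatts / 1000.0,
           voltaGflopsPerW / k20xGflopsPerW);
    return 0;
}

That lands at roughly 5.6 TFLOPS, in the same ballpark as the ~5 TFLOPS figure above; all of it hinges on reading values off a marketing chart.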

Finally, while Volta is capturing the majority of the press due to the fact that it’s the newest GPU coming out of NVIDIA, this latest roadmap does also offer a bit more on Maxwell. Maxwell’s marquee feature as it turns out is unified virtual memory. CUDA already has a unified virtual address space available, so this would seemingly go beyond that. In practice such a technology is important for devices integrating a GPU and a CPU onto the same package, which is what the AMD-led Heterogeneous System Architecture seeks to exploit. For NVIDIA, their Parker SoC will be based on Maxwell for the GPU and Denver for the CPU, so this looks to be a feature specifically set up for Parker and Parker-like products, where NVIDIA can offer their own CPU integrated with a Maxwell GPU.
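For reference, here's a short sketch of the unified virtual addressing CUDA already offers (since CUDA 4.0): host and device pointers live in one address space, so cudaMemcpyDefault lets the runtime infer the copy direction from the pointers alone. Whatever Maxwell's unified virtual memory adds would go beyond this:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const int n = 256;
    const size_t bytes = n * sizeof(float);

    // Pinned host memory, mapped into the unified virtual address space.
    float *host;
    cudaMallocHost((void **)&host, bytes);

    float *device;
    cudaMalloc((void **)&device, bytes);

    for (int i = 0; i < n; ++i) host[i] = (float)i;

    // No explicit cudaMemcpyHostToDevice needed -- with unified virtual
    // addressing the runtime determines the direction from the pointers.
    cudaMemcpy(device, host, bytes, cudaMemcpyDefault);

    cudaFree(device);
    cudaFreeHost(host);
    return 0;
}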

Piz Daint Supercomputer Announced, Powered By Tesla K20X

Posted: 19 Mar 2013 11:00 AM PDT

Along with NVIDIA’s keynote this morning (which should be wrapping up by the time this article goes live), NVIDIA also has a couple of other announcements hitting the wire this morning. The first of these is NVIDIA landing another major supercomputer contract, this time with the Swiss National Supercomputing Centre (CSCS).

CSCS will be building a new Cray XC30 supercomputer, “Piz Daint.” Like Titan last year, Piz Daint is a mixed supercomputer that will pack both a large number of CPUs – Xeon E5s to be precise – and, of course of great interest to NVIDIA, a large number of Tesla K20X GPUs. We don’t have the complete specifications for Piz Daint at this time, but when completed it is expected to exceed 1 PFLOPS in performance and to be the most powerful supercomputer in Europe.

Piz Daint will be filling in several different roles at CSCS. Its primary role will be weather and climate modeling, working with Switzerland’s national weather service MeteoSwiss. Along with weather work, CSCS will also be using time on Piz Daint for other science fields, including astrophysics, life science, and material science.

For NVIDIA of course this marks another big supercomputer win for the company. Though not a huge business on its own at this time relative to the complete Tesla business, wins like Titan and Piz Daint are prestigious for the company due to the importance of the work done on these supercomputers and the name recognition they bring.


NVIDIA Updates Tegra Roadmap Details at GTC - Logan and Parker Detailed

Posted: 19 Mar 2013 10:50 AM PDT

We're at NVIDIA's GTC 2013 event, where team green just updated their official roadmap and shared some more details about their Tegra portfolio, specifically additional information about Logan and Parker, the codenames for the SoCs beyond Tegra 4. First up is Logan, which will be NVIDIA's first SoC with CUDA inside, specifically courtesy of a Kepler architecture GPU capable of CUDA 5.0 and OpenGL 4.3. There are no details on the CPU side of things, but we're told to expect Logan demos (and samples) inside 2013 and production devices in early 2014.

It’s interesting to note that with the move to a Kepler architecture GPU, Logan will be taking on a vastly increased graphics feature set relative to Tegra 4. With Kepler comes OpenGL 4.3 capability, meaning that NVIDIA is not just catching up to OpenGL ES 3.0, but shooting right past it. Tessellation, compute shaders, and geometry shaders, among other things, are all present in OpenGL 4.3, far exceeding the much more limited and specialized OpenGL ES 3.0 feature set. Given the promotion that NVIDIA is putting into this – they've been making it quite clear to everyone that Logan will be OpenGL 4.3 capable – this may mean that NVIDIA intends to use OpenGL 4.3 as a competitive advantage with Logan, attracting developers and users looking for a more feature-filled SoC than what current OpenGL ES 3.0 SoCs are slated to provide.

On a final note about Logan, it’s interesting to note that Kepler has a fairly strict shader block granularity of 1 SMX, i.e. 192 CUDA cores. While NVIDIA can always redefine what counts as Kepler, if they do stick to that granularity then it should give us a very narrow range of possible GPU configurations for Logan.
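Put in concrete terms, a strict 1 SMX granularity means Logan's shader count would have to land on a multiple of 192. A sketch of the possibilities (how many SMXes fit a mobile power budget is pure speculation on our part):

#include <cstdio>

int main()
{
    // Kepler's building block is the SMX: 192 CUDA cores apiece. If Logan
    // keeps that granularity, these are the only core counts on the table.
    for (int smx = 1; smx <= 4; ++smx)
        printf("%d SMX = %3d CUDA cores\n", smx, smx * 192);
    return 0;
}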

After Logan comes Parker, which NVIDIA shared will pair the codename Denver CPU NVIDIA is working on – complete with 64-bit capabilities – with a codename Maxwell GPU. Parker will also be built using 3D FinFET transistors, likely from TSMC.

Like Logan, it's clear that Parker will benefit from being based on a recent NVIDIA dGPU. While we don't know a great deal about Maxwell since it doesn't launch for roughly another year, NVIDIA has told us that Maxwell will support unified virtual memory. With Logan NVIDIA gains CUDA capabilities thanks to Kepler, but with Parker they are laying down the groundwork for full-on heterogeneous computing in a vein similar to what AMD and the ARM partners are doing with HSA. NVIDIA has so far not talked about heterogeneous computing in great detail since they only provide GPUs and limited-functionality SoCs, but with Denver giving them an in-house CPU to pair with their in-house GPUs, products like Parker will give them new options to explore. And perhaps more meaningfully, the means to counter HSA-enabled ARM SoCs from rival firms.

In addition, NVIDIA showed off a new product named Kayla, a small, mini-ITX-like board running a Tegra 3 SoC and an unnamed new low-power Kepler family GPU.

NVIDIA's GPU Technology Conference 2013 Keynote Live Blog

Posted: 19 Mar 2013 09:04 AM PDT

We're live at NVIDIA's 2013 GPU Technology Conference (GTC) press conference, seated and ready to go. Anand, Ryan, and I are here and expecting Jen-Hsun's keynote to get under way shortly.

Cooler Master Storm Scout II Advanced Case Review: Falling Behind the Curve

Posted: 19 Mar 2013 09:01 AM PDT

Cooler Master has been fairly gung-ho on the PR side about their Storm Scout II Advanced. While we missed the opportunity to review its predecessor, the Storm Scout II, we aim to rectify that omission by putting this new semi-portable ATX chassis through its paces. Cooler Master has a long history of strong enthusiast offerings (with their HAF line being particularly well loved), but does the Storm Scout II Advanced inherit that legacy of greatness, or is it falling behind the curve?
