AnandTech Article Channel
- Hands On With Nokia's New Phones - Lumia 520, Lumia 720, Nokia 105 and Nokia 301
- AMD Announces FirePro R5000; PCoIP For Remote Desktops
- LG Optimus G Pro Hands On & Performance Preview, Snapdragon 600 Tested at MWC 2013
- Samsung Announces SAFE with KNOX - Security Container for Enterprise BYOD
- Live from Nokia's MWC 2013 Press Conference
- Audience Announces eS325 Advanced Voice Processor - eS515 Sans Codec
- Intel's Clover Trail+: Dual-Core CPU and Graphics Unveiled at MWC 2013
- NVIDIA Tegra 4 Architecture Deep Dive, Plus Tegra 4i, Icera i500 & Phoenix Hands On
- Huawei Ascend P2 LTE: Hands On
Hands On With Nokia's New Phones - Lumia 520, Lumia 720, Nokia 105 and Nokia 301
Posted: 25 Feb 2013 01:40 AM PST

Right after the Nokia press conference I spent time playing around with all of the newly announced Nokia phones, and there are four. Nokia's announcement primarily focused on entry-level devices targeting the mass market, rather than another flagship model.

Starting with the Lumias, I got a chance to dig around in the About screen and confirm that both the 520 and 720 are MSM8227 based - dual-core Krait with Adreno 203 graphics, the Krait cores running at 1.0 GHz. Build quality on both the 520 and 720 was impressive, and they're unmistakably Nokia Lumia phones. I'm pretty impressed with the 720, which seems like a well put together device and also contains an interesting camera. Nokia's rear facing module on the 720 is the first sub-F/2.0 optics in a smartphone that I'm aware of, at F/1.9, and includes a 6.7 MP CMOS. I took some sample photos with the device but couldn't get them off; what I saw on the phone did look impressive, however.

I also got a chance to play with the Nokia 105, the company's 15 euro device aimed at the entry level. The phone is somewhat thick but obviously very well constructed. Finally, the Nokia 301 has an interesting voice-assisted self-shot mode, which gives you prompts to center your face in the field of view.
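As a quick sanity check on what the F/1.9 aperture buys over more typical smartphone optics, light gathering scales with the inverse square of the f-number. A minimal sketch of that relationship (standard optics arithmetic, not Nokia-supplied data):

```python
# Light gathered scales as 1/(f-number)^2 at a fixed focal length.
def light_gain(f_new, f_ref):
    return (f_ref / f_new) ** 2

print(f"F/1.9 vs F/2.0: {light_gain(1.9, 2.0):.2f}x")  # ~1.11x, about 11% more light
print(f"F/1.9 vs F/2.4: {light_gain(1.9, 2.4):.2f}x")  # ~1.60x vs a common F/2.4 module
```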
AMD Announces FirePro R5000; PCoIP For Remote Desktops
Posted: 25 Feb 2013 01:00 AM PST

Though most readers aren't acutely aware of it, AMD's FirePro lineup of graphics cards encompasses a number of more specialized fields. On top of "mainline" cards like the FirePro W series for workstation graphics and compute, AMD also has several other series, including the server graphics/compute/VDI focused S series and the display wall focused W600. Today another FirePro series will be getting a product refresh, the remote desktop (PCoIP) R series, with the launch of the FirePro R5000.

Long-time readers will recall that AMD already has a foot in the remote desktop market with the S series, but the S series is primarily geared towards virtualized desktops (VDI). The R series in that respect is a rather specialized piece of hardware from AMD that's not like anything else in their current (Southern Islands based) product lineup. Unlike the S series, the R series is specifically designed for 1:1 (unvirtualized and unshared) remote desktops through PCoIP technology, and it has additional 3rd party hardware to accomplish this. The hardware in question is a processor from Teradici, principal owners of the PCoIP technology.

The PCoIP environment is a so-called "zero client" environment; rather than being based on lightweight workstations running OSes like Windows Embedded, it uses a modern approximation of the "dumb terminal" of the olden days. PCoIP zero clients in this regard do almost nothing on their own, instead presenting a remote desktop experience delivered over a UDP/IP network and hosted entirely on a remote server. Accomplishing this are Teradici's processors, which provide the necessary hardware to encode the remote desktop video stream and abstract PCoIP from the video card, while similar processors on the client side handle the decoding in the case of fully realized zero client devices. AMD fits into this by being a provider of the necessary server hardware, as one of the principal methods of driving PCoIP on the server side is to use graphics cards equipped with host processors and Ethernet ports. The R5000 in turn is one such implementation, pairing up an AMD Pitcairn GPU with a TERA2240 host processor.

The FirePro R5000 will be replacing AMD's previous PCoIP product, the FirePro RG220 family, a pair of video cards introduced a few years back based on AMD's RV711 processors. In this respect the R5000 is not only significantly more powerful than the multiple-generation-old RG220 series, but it's also aimed far higher up the performance ladder; RV711 was a low-end GPU in AMD's RV700 series family, while Pitcairn is the middle child in the Southern Islands family. Unfortunately AMD doesn't provide any hard specifications on the R5000's graphics performance, but given the similar configuration found on the W5000, it's not unreasonable to guess that performance should be around the W5000's (1.28 TFLOPS). Interestingly, if this assumption is correct, it would mean that the R5000's 150W TDP is almost evenly split between the Pitcairn GPU and the TERA host processor.
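To put the 1.28 TFLOPS guess in context: single-precision throughput for a GCN card is stream processors × 2 FLOPs per cycle (one fused multiply-add) × clock. A minimal sketch using the W5000's published configuration of 768 stream processors at roughly 825MHz - treat these as reference numbers for the estimate, not as R5000 specifications:

```python
# Peak single-precision throughput for a GCN GPU:
# stream processors * 2 FLOPs/cycle (fused multiply-add) * clock.
def peak_tflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz / 1000.0

print(f"W5000-like config: {peak_tflops(768, 0.825):.2f} TFLOPS")  # ~1.27 TFLOPS

# If the R5000's graphics half lands near the W5000's sub-75W board power,
# roughly half of the 150W TDP is left over for the TERA2240 and support logic.
```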
LG Optimus G Pro Hands On & Performance Preview, Snapdragon 600 Tested at MWC 2013
Posted: 24 Feb 2013 11:42 PM PST

We began our day at Mobile World Congress with a visit to LG's booth, where it offered hands on time with its 1H 2013 lineup of smartphones. Among them is the new 5.5" Optimus G Pro, the larger brother of last year's Optimus G - the basis for the Nexus 4. The Pro adds a 5.5" 1080p display, driven by Qualcomm's Snapdragon 600 SoC (quad-core Krait 300 running at 1.7GHz paired with an Adreno 320 GPU). Internally there's 2GB of LPDDR2 memory, and storage expansion is provided via a microSD card slot beneath the battery cover. The 3400mAh battery is removable.

The Optimus G Pro is mostly made out of plastic, but the device felt good in hand. It's pretty impressive what can be done with plastic these days, although it seems like metal and/or glass are still necessary for that ultra high end device feel. When it comes to making a device feel rugged, however, there's no real replacement for plastic. The Pro features an IR emitter as well as an extendable DMB antenna; the latter should obviously disappear if and when this thing hits the US market.

The display itself looked wonderful at the show. The 1080p panel is very sharp, bright and contrasty. Subjectively, colors looked good at the show, but we'll have to run it through our suite to get a feel for just how accurate they are.

The demo units at the show were running Android 4.1.2. They also featured LG's own software customizations, including the ability to view multiple apps on the screen at the same time (QSlide) and set their transparency, so you can do things like keep a video or calendar visible while reading a web page. The QSlide feature is clearly geared towards multitasking, similar to what Samsung has done on its Galaxy Note line. You launch QSlide enabled apps via the LG customized notifications shade, and then control their transparency using a slider at the top of each QSlide app. While semi-transparent, a QSlide app ignores touch input, allowing you to continue to interact with the app behind it. In opaque mode, however, you can interact with the foreground QSlide app. Although it is possible to open multiple QSlide apps at the same time, you quickly run out of screen real estate.

Another neat feature of LG's latest Android build is Dual Recording, a camera customization that allows you to simultaneously record from both the front and rear facing cameras. The 13MP primary camera is limited to 720p recording in this mode.
The Optimus G Pro felt extremely quick and responsive during our hands on time at the show. Scrolling and UI performance was all very smooth. Given that we haven't had much experience with Snapdragon 600 and its Krait 300 CPU cores, I ran a few tests here at the show to get a feel for what is in store from Qualcomm's mainstream performance quad-core SoC for 2013.

Running in Chrome, Kraken showed some great performance on the Optimus G Pro. While not quite as fast as Intel's Atom Z2460, it's a big step forward compared to the APQ8064 (Krait 200) based Nexus 4. If this data is representative of the sort of improvement we can expect from Snapdragon 600, I'll be happy.

On the graphics side, Adreno 320 is still powering things on the G Pro, although we don't know what clocks the platform at the show was using. Since the Optimus G Pro uses a full HD/1080p display, the on- and off-screen results are very similar. Frame rates are low enough in Egypt HD that the lack of vsync in the offscreen tests doesn't have a real impact on performance. Although respectable, I suspect that the 26 fps here is a bit lower than we'd see on production hardware, since the 320 should be clocked higher in Snapdragon 600 than in last year's S4 Pro.
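The on/offscreen comparison reduces to a simple rule: onscreen results are capped by vsync at the panel's refresh rate, while offscreen runs render uncapped at a fixed 1080p target. A toy model of that reasoning (the 26 fps figure is from the show floor; the other inputs are purely illustrative):

```python
# Onscreen results are vsync-limited to the display refresh rate;
# offscreen results render uncapped at a fixed 1080p target.
def onscreen_fps(gpu_fps, refresh_hz=60):
    return min(gpu_fps, refresh_hz)

for gpu_fps in (26, 45, 90):
    print(f"GPU capable of {gpu_fps} fps -> onscreen {onscreen_fps(gpu_fps)} fps")

# At 26 fps the 60Hz cap never engages, which is why the G Pro's on- and
# offscreen Egypt HD numbers land so close together on a 1080p panel.
```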
Samsung Announces SAFE with KNOX - Security Container for Enterprise BYOD
Posted: 24 Feb 2013 11:00 PM PST

I talked with Anand about his impressions of the Note 8 after reading his hands on piece, and one thing that struck me was a mention of how Samsung was going to aggressively go after the enterprise market in the USA, for a few reasons. First, a lot of its marketing has focused on SAFE (SAmsung For Enterprise), which is a combination of improved EAS (Exchange ActiveSync) policies and improved MDM (Mobile Device Management) integration, with more toggles and sliders for IT admins in enterprise roles. Second, once you win the enterprise market you're guaranteed some market loyalty and a long tail of sales, thanks to the slower pace of enterprise acquisition and certification. I didn't really appreciate just how hard Samsung was going after the enterprise business until I learned about its plans for another product geared at enterprise policy enforcement, called KNOX - and Samsung truly wants to be the one who KNOX.

KNOX builds on SAFE by adding two main parts: a fully secure boot chain, and a new container based sandbox for Android. The idea is for Samsung to both become desirable to enterprise businesses and enable even greater BYOD (Bring Your Own Device) functionality, by shipping a single SKU that can easily be attached to an enterprise login and managed. At the same time, the container model means that consumers who bring their devices to a particular business won't lose anything other than the container data if they leave and have their devices wiped remotely. The result is, in theory, a win-win for IT admins who want more control over the devices being brought into the fray, and for employees who don't want to lose personal data in the case of a device wipe, or who have privacy concerns about the control IT admins have over the platform.

First is the secure platform story, which begins with a secure boot chain that only boots signed code, then SE Android (Security Enhanced Linux for Android), and TrustZone Integrity Monitoring (TIMA). Samsung will have more information about the hardware and software underpinnings of KNOX available in a whitepaper later this week. There are some interesting implications, to say the least, for enthusiast users who want to run their own arbitrary third party ROMs, especially since the secure boot chain will ship enabled in markets targeted for KNOX and on "iconic devices" at the high end to make them BYOD-capable.

The second part is the secure, enterprise-controlled container, which exposes itself as an application icon or shortcut in Android and takes you into another instance of Android that is completely sandboxed off from the user's personal side. Admins then get complete control over the container, including what apps exist inside, all while maintaining the same Android UI and platform. Email, browser, contacts, calendars, and so on exist inside the container, isolated from the personal Android environment outside. KNOX will include certification for FIPS 140-2 (DAR, DIT), Government Root of Trust, US DOD CAC/PIV, and US DOD Mobile OS SRG on applicable devices. In addition, KNOX includes more IT policies for MDM APIs, and ActiveDirectory based management for enterprises that don't have an MDM solution or don't want to use Exchange.

The rest of the story is really one of timing and focus.
Samsung says it is targeting KNOX heavily at the US market, and compliance with so many federal and defense security standards makes that much obvious. Timing wise, KNOX will ship on "iconic devices" in Q2 2013.
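The container model's key promise is selective wipe: a remote wipe from IT clears the enterprise container while leaving personal data untouched. A purely conceptual sketch of that separation (toy code, not Samsung's actual API or implementation):

```python
# Toy model of KNOX-style container separation: enterprise data lives in
# an isolated container, so a remote wipe never touches the personal side.
class ByodDevice:
    def __init__(self):
        self.personal = {"photos": 412, "contacts": 209}   # user-owned data
        self.container = {"email": 183, "calendar": 57}    # IT-managed data

    def remote_wipe(self, full=False):
        self.container.clear()        # enterprise container is always wiped
        if full:
            self.personal.clear()     # pre-container status quo: everything goes

device = ByodDevice()
device.remote_wipe()                  # employee leaves the company
print(device.personal)                # personal data survives: {'photos': 412, ...}
print(device.container)               # enterprise data gone: {}
```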
Live from Nokia's MWC 2013 Press Conference
Posted: 24 Feb 2013 10:27 PM PST
Audience Announces eS325 Advanced Voice Processor - eS515 Sans Codec
Posted: 24 Feb 2013 10:00 PM PST

Earlier this year Audience announced the eS515, a combination new-generation voice processor and audio codec. The eS515 brought three-microphone noise suppression, improved suppression of reverb, more wideband audio modes, improved ASR (assisted speech recognition), and multimedia features for recording video or interviews. Today Audience is announcing another member of that family, the eS325, which is essentially an eS515 without the audio codec features: the same voice processor IP block, but standalone and discrete for OEMs who choose to implement it that way.

The eS325 is built on a 40nm process, just like the eS515. Previous generations were built on a 65nm process, and the process change brings a reduction in power consumption of anywhere between 40 and 50 percent. This is a big gain, since some of the OEM feedback I've heard is that the tradeoff between Audience and other solutions is that although noise suppression might be better, there's a power penalty that shows up in call times. The eS325 comes in a 3x3mm package and also includes SLIMbus, which is important on Qualcomm's Fusion 3 platform (APQ8064+MDM9x15) and beyond, since there's no longer any I2S on that platform - something that has pushed much of the industry toward SLIMbus.

I'm told we will see a number of phones with the eS325 this year from the usual suspects (likely Samsung, who has been a regular customer), which will bring the full three-microphone noise suppression feature to bear. In addition, there are some new details about just how much performance improves over the previous generation using the three-microphone method: anywhere between 4 and 7 dB.
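For a sense of what a 4-7 dB improvement in suppression means in linear terms, decibels convert to power ratios as 10^(dB/10). A quick worked conversion (standard signal math, not Audience's measurement methodology):

```python
# Convert a dB improvement in noise suppression into a linear power ratio.
def db_to_power_ratio(db):
    return 10 ** (db / 10)

for db in (4, 7):
    print(f"{db} dB -> residual noise power reduced ~{db_to_power_ratio(db):.1f}x")
# 4 dB is ~2.5x less residual noise power, 7 dB is ~5x, vs the prior generation.
```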
Intel's Clover Trail+: Dual-Core CPU and Graphics Unveiled at MWC 2013
Posted: 24 Feb 2013 08:00 PM PST

We find ourselves at another Mobile World Congress, discussing another Intel Atom based smartphone SoC. While last year we were talking about faster and slower variants of the Medfield family, this year there's a new topic for discussion: Clover Trail+.

Here's where Intel's codenames start getting a bit confusing. Medfield was the original 32nm Atom SoC for smartphones, while Clover Trail was the 32nm platform for Windows 8 tablets. Clover Trail+ is still a 32nm Atom based SoC, but it can be used in phones and tablets. Architecturally, Clover Trail+ takes the Medfield platform, adds a second CPU core (and leaves Hyper Threading enabled), dramatically improves GPU performance and introduces the same memory controller enhancements that were present in the original Clover Trail. Compared to Clover Trail, CT+ mostly adds improved graphics performance.

Intel Goes Dual-Core

On the CPU side we're still talking about 32nm, 2-wide, in-order Atom cores with two threads per core (4 threads total per chip). This is still a direct descendent of the original 45nm Atom core that first debuted in 2008. Intel is quick to point out that the redesigned 22nm architecture is still on track for an introduction (in tablets) this holiday season, but for now we're still stuck with the old core. That's not to say that 32nm Atom isn't competitive. As we've already shown, from a performance standpoint the 5-year-old Atom architecture is still faster than everything but ARM's Cortex A15. Qualcomm's 28nm Krait architecture is lower power, but Intel is at least competitive.

Frequencies vary depending on the SKU you're talking about. Just like with Medfield, there are three CT+ SKUs: the Atom Z2580, Z2560 and Z2520. Compared to the previous generation, the second digit has been incremented by one, indicating that CT+ is obviously newer/better. All three SKUs feature two cores; what changes is their max CPU and GPU clocks. The Z2580 tops out at 2.0GHz, just like the Z2460 does today. The Z2560 hits 1.6GHz, and the Z2520 runs at up to 1.2GHz. Both cores are capable of these peak frequencies. The low and nominal operating frequencies are all listed in the table below:
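As a compact recap of the SKU breakdown, here are the peak clocks expressed as a small lookup table (burst clocks only, as named in the text; treat it as a summary sketch rather than Intel's full frequency table):

```python
# Clover Trail+ SKUs: two Hyper-Threaded Atom cores each,
# differentiated by peak CPU (and GPU) clocks.
CT_PLUS_SKUS = {
    "Z2580": 2.0,  # GHz, matches today's Z2460 peak
    "Z2560": 1.6,
    "Z2520": 1.2,
}

for sku, max_ghz in CT_PLUS_SKUS.items():
    print(f"Atom {sku}: 2C/4T, both cores up to {max_ghz} GHz")
```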
Single threaded CPU performance should remain unchanged from Medfield, but multi/heavily-threaded workloads will see a healthy boost running on Clover Trail+.

A Much Faster GPU

The big change is of course the GPU core. While the Penwell SoC in Medfield integrated a single PowerVR SGX 545 core, CT+ features two PowerVR SGX 544 cores. The SGX 544 looks a lot like the SGX 543 used in Apple's A5/A5X, but with 2x the triangle setup performance and DirectX 10 class texturing hardware. There's no change to the shader ALU count:
As always, Intel is very aggressive on GPU frequencies in CT+. The 544MP2 runs at a max of 300MHz in the case of the Z2520, and can burst up to 533MHz in the case of the Z2580. This max frequency can give the Z2580 similar shader performance to Apple's A5X, and competitive performance with Apple's iPhone 5/A6 SoC. Intel claims GLBenchmark 2.5 Egypt HD offscreen performance somewhere around 30 fps, putting it near the performance of Qualcomm's Adreno 320. The CT+ memory subsystem is still a 2 x 32-bit LPDDR2 interface, but the max data rate moves up to 1066MHz. Putting that sort of GPU power in a mobile SoC is almost unheard of from Intel, and it's a very good thing to see.

Other Improvements & Power Consumption

Power consumption shouldn't be any different than Medfield under light CPU loads; however, if you keep both cores pegged you'll obviously burn through your battery quicker. The same is true for the faster GPU. Like many modern smartphone SoCs, the addition of extra cores simply increases the dynamic range of SoC power consumption. Intel does claim that its CT+ FFRD uses less power at idle than Motorola's RAZR i (based on Medfield), although that may be more attributable to the platform itself than specifically the SoC. I can see CT+ devices delivering battery life similar to, and perhaps even better than, Medfield, but I can also see them being a lot worse depending on usage model.

Related to the GPU improvements is the change that gives CT+ its name. Like Clover Trail before it, the CT+ silicon integrates an updated memory controller that's capable of some form of reordering of memory operations, allowing the GPU to better preempt the CPU when it needs the bandwidth. This change wasn't present in Medfield but made it into Clover Trail, and is also included in CT+.

The improved GPU also enables support for higher resolution panels, now up to 1920 x 1200. You can already see the higher res display support at work in Lenovo's IdeaPhone K900, which features a 5.5" 1080p panel, but Intel told us to expect Android tablets based on CT+ as well. Given Intel's relative absence from the Android tablet space, it's clear that this is a big deal.

Video encode/decode hardware is mostly unchanged, with the platform still capable of 1080p30 encode and decode. Intel does claim video decode performance is improved, partially due to the enhancements on the memory interface side. The Silicon Hive (now owned by Intel) ISP is physically the same as what was in Medfield, but features largely re-written firmware to improve performance and add functionality. Intel improved HDR processing performance with the firmware re-write, which is supposed to reduce ghosting and motion blur when shooting in HDR mode. The new firmware also enables a time shift mode where you can take multiple captures and select faces from individual frames to assemble a multi-person photo where everyone is smiling/not blinking. CT+ still supports up to a 16MP primary camera and up to a 2MP secondary camera, with an 8MP limit for burst mode.

The Baseband Story: Finally XMM 6360

On the baseband side, Intel is finally updating its very old XMM 6260 silicon to the XMM 6360. The update brings 3GPP Release 9 support, with 42Mbps DC-HSPA+ (Category 24) as well as HSUPA Category 7 (11.5 Mbps). The XMM 6360 still ships with a pentaband transceiver.
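As a quick check on where the 42Mbps figure comes from: DC-HSPA+ Category 24 bonds two 5MHz WCDMA carriers, each carrying a 64QAM HSPA+ downlink of roughly 21.1 Mbps. A small worked sketch (the per-carrier rate is a standard 3GPP figure, not from Intel's disclosure):

```python
# DC-HSPA+ (HSDPA Category 24) bonds two adjacent 5MHz WCDMA carriers,
# each running a 64QAM HSPA+ downlink of ~21.1 Mbps.
PER_CARRIER_64QAM_MBPS = 21.1
carriers = 2
print(f"DC-HSPA+ peak downlink: ~{carriers * PER_CARRIER_64QAM_MBPS:.1f} Mbps")  # ~42.2
```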
While this is still shy of enabling LTE, for many of the world markets where dual-carrier support is important this is a huge step forward for Intel in the baseband department - finally bringing the company up to our baseline expectations. Intel remains behind Qualcomm when it comes to baseband silicon, and it took an absurd amount of time to move off of the 6260 for its high end solution. I'm not entirely sure what's taking so long with modernizing Intel's modem roadmap, but it's been pretty embarrassing thus far. The good news is we're supposed to see Intel's first LTE solution by the end of the year. The XMM 6360 is built on a 40nm process, ironically enough not at Intel's own fabs - a remnant of the old Infineon business that Intel acquired a few years back.

An Updated Reference Design

Intel built a new FFRD (Form Factor Reference Design) around CT+, with some obviously updated specs. The new FFRD includes an Atom Z2580 with the new XMM 6360 baseband silicon. Internal memory gets an upgrade from 1GB of LPDDR2-800 to 2GB of LPDDR2-1066 (the bandwidth implications are sketched below). The maximum amount of NAND supported moves to 256GB, although I'd be very surprised to see that deployed in a shipping device. The rear facing camera module in Intel's CT+ FFRD moves up to 16MP, while the front facing module gets an upgrade to 2MP.
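The memory upgrade is easy to quantify: peak bandwidth is bus width times transfer rate. A quick sketch comparing the two FFRD configurations (my arithmetic, not an Intel-quoted figure):

```python
# Peak DRAM bandwidth: channels * bytes per channel * transfer rate (MT/s).
def bandwidth_gb_s(channels, bits_per_channel, mt_per_s):
    return channels * (bits_per_channel / 8) * mt_per_s / 1000

medfield = bandwidth_gb_s(2, 32, 800)    # Medfield FFRD: 2 x 32-bit LPDDR2-800
ct_plus = bandwidth_gb_s(2, 32, 1066)    # Clover Trail+: 2 x 32-bit LPDDR2-1066
print(f"Medfield: {medfield:.1f} GB/s -> CT+: {ct_plus:.1f} GB/s")  # 6.4 -> ~8.5 GB/s
```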
Finally, the CT+ FFRD ships with Android 4.2. As always, Intel's partners are welcome to ship a skinned version of the FFRD if they would like, or they can choose to implement individual components.

Final Words

Clover Trail+ is an important step forward for Intel, as it finally shows real progress in the company prioritizing GPU performance. We thought Medfield was a completely acceptable smartphone SoC platform, and it was actually very well implemented in Motorola's RAZR i, so it's likely that we'll be similarly pleased with CT+. Where Intel needs to deliver, however, is on two fronts. For starters, it needs to bring 22nm silicon to market so we can really see what Intel can do on the power/performance front, rather than continuing to have this discussion about Intel simply being within the right range of the competition. Secondly, and equally important, Intel simply needs more high-profile design wins. Motorola's RAZR i and Lenovo's IdeaPhone K900 both seem like good attempts, but to really begin to pull share from Qualcomm it needs Nexus 4, Galaxy S and HTC One class design wins. That, I suspect, is a much more difficult thing to pull off - at least until 22nm silicon rolls around.
NVIDIA Tegra 4 Architecture Deep Dive, Plus Tegra 4i, Icera i500 & Phoenix Hands On
Posted: 24 Feb 2013 11:00 AM PST

Ever since NVIDIA arrived on the SoC scene, it has done a great job of introducing its ultra mobile SoCs. Tegra 2 and 3 were both introduced with a healthy amount of detail and the sort of collateral we expect to see from any PC silicon vendor. While the rest of the mobile space is slowly playing catchup, NVIDIA continued the trend with its Tegra 4 and Tegra 4i architecture disclosure. Since Tegra 4i is a bit further out, much of today's disclosure centered on the flagship Tegra 4 SoC, due to begin shipping in Q2 of this year along with the NVIDIA i500 baseband. At a high level you're looking at a quad-core ARM Cortex A15 (plus a fifth A15 companion core) and a 72-core GeForce GPU. To understand Tegra 4 at a lower level, we'll dive into the individual blocks, beginning as usual with the CPU.
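NVIDIA counts each scalar ALU as a GPU "core," so a rough peak-throughput estimate is cores × 2 FLOPs (one multiply-add) × clock. A back-of-the-envelope sketch, with the clock left as an explicit assumption since no shipping frequency is attached to the 72-core figure here:

```python
# Peak GPU throughput when "cores" are scalar MAD ALUs:
# cores * 2 FLOPs/cycle * clock (MHz) / 1000 = GFLOPS.
def peak_gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000

# 72 cores comes from the Tegra 4 announcement; 600 MHz is an assumed
# placeholder, not a confirmed NVIDIA clock.
print(f"~{peak_gflops(72, 600):.0f} GFLOPS at an assumed 600 MHz")  # ~86 GFLOPS
```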
Huawei Ascend P2 LTE: Hands On
Posted: 24 Feb 2013 04:59 AM PST

We stopped by Huawei's press event here in Barcelona just before Mobile World Congress 2013 kicks off, and Huawei had an interesting surprise lined up. Huawei put the Ascend P2 LTE out on display before the event actually started, and we seized the opportunity to play around with it and get some impressions.

The P2 LTE is based around Huawei's own silicon, the familiar K3V2 SoC, which consists of four ARM Cortex A9s at 1.5 GHz. There's 1 GB of RAM onboard, and a 1280x720 display on the front that looks like it's 4.7 inches in size (Update: 4.7 inches and IPS LCD). In addition, the P2 includes LTE and the rest of the 3GPP suite of connectivity (WCDMA/GSM). I'm told this is also courtesy of Huawei's own silicon; on the baseband side that's a Balong 7xx series part, which is UE Category 4 (150 Mbps down). The P2 is also pentaband WCDMA, and I'm told it will support all the major LTE bands. The P2 also includes a 2420 mAh battery.

The P2 looks very close in style, shape, and industrial design to the P1 and is its clear successor. The back of the P2 isn't a smooth plastic anymore but more of a rubberized, textured material, making it easier to grip. There's a dedicated camera button, and capacitive home, back, and menu keys at the bottom. The P2 also includes a 13 MP camera.

Source: Huawei