Tuesday, 23 April 2019

NVIDIA Launches GeForce GTX 1650: Budget Turing For $149, Available Today - AnandTech

A bit over 8 months after it all began, the tail-end of NVIDIA’s GeForce Turing product stack launch is finally in sight. This morning the company is rolling out its latest and cheapest GeForce Turing video card, the GeForce GTX 1650. Coming in at $149, the newest member of the GeForce family is set to bring up the rear of the GeForce product stack, offering NVIDIA’s latest architecture in a low-power, 1080p-with-compromises gaming video card with a budget-friendly price to match.

In very traditional NVIDIA fashion, the Turing launch has been a top-to-bottom affair. After launching the four RTX 20 series cards early in the cycle, NVIDIA’s efforts in the last two months have been focused on filling in the back end of their product stack. Central to this is a design variant of NVIDIA’s GPUs, the TU11x series – what I’ve been dubbing Turing Minor – which are intended to be smaller, easier to produce chips that retain the all-important core Turing architecture, but do away with the dedicated ray tracing (RT) cores and the AI-focused tensor cores as well. The end result of this bifurcation has been the GeForce GTX 16 series, which is designed to be a leaner and meaner set of Turing GeForce cards.

To date, the GTX 16 series has consisted solely of the GTX 1660 family of cards – the GTX 1660 (vanilla) and GTX 1660 Ti – both of which were based on the TU116 GPU. Today, however, the GTX 16 series family is expanding with the introduction of the GTX 1650, along with the new Turing GPU powering NVIDIA’s junior-sized card: TU117.


Unofficial TU117 Block Diagram

Whereas the GeForce GTX 1660 Ti and the underlying TU116 GPU served as our first glimpse at NVIDIA’s mainstream product plans, the GeForce GTX 1650 is a much more pedestrian affair. The TU117 GPU beneath it is for all practical purposes a smaller version of the TU116 GPU, retaining the same core Turing feature set, but with fewer resources all around. Altogether, coming from the TU116 NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market. Still, at 200mm2 in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.

At any rate, TU117 will be going into numerous NVIDIA products over time. But for now, things start with the GeForce GTX 1650.

NVIDIA GeForce Specification Comparison

                         GTX 1650         GTX 1660         GTX 1050 Ti      GTX 1050
CUDA Cores               896              1408             768              640
ROPs                     32               48               32               32
Core Clock               1485MHz          1530MHz          1290MHz          1354MHz
Boost Clock              1665MHz          1785MHz          1392MHz          1455MHz
Memory Clock             8Gbps GDDR5      8Gbps GDDR5      7Gbps GDDR5      7Gbps GDDR5
Memory Bus Width         128-bit          192-bit          128-bit          128-bit
VRAM                     4GB              6GB              4GB              2GB
Single Precision Perf.   3 TFLOPS         5 TFLOPS         2.1 TFLOPS       1.9 TFLOPS
TDP                      75W              120W             75W              75W
GPU                      TU117 (200mm2)   TU116 (284mm2)   GP107 (132mm2)   GP107 (132mm2)
Transistor Count         4.7B             6.6B             3.3B             3.3B
Architecture             Turing           Turing           Pascal           Pascal
Manufacturing Process    TSMC 12nm "FFN"  TSMC 12nm "FFN"  Samsung 14nm     Samsung 14nm
Launch Date              4/23/2019        3/14/2019        10/25/2016       10/25/2016
Launch Price             $149             $219             $139             $109
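
For reference, the single-precision figures in the table follow directly from the core counts and boost clocks: each CUDA core executes one FMA (two FLOPs) per clock. A quick sketch of that arithmetic, using only numbers from the table above:

```python
# FP32 throughput = 2 FLOPs (one fused multiply-add) per CUDA core per clock,
# evaluated at the boost clock. All inputs come from the spec table.
cards = {
    "GTX 1650":    (896,  1.665),   # (CUDA cores, boost clock in GHz)
    "GTX 1660":    (1408, 1.785),
    "GTX 1050 Ti": (768,  1.392),
    "GTX 1050":    (640,  1.455),
}

for name, (cores, boost_ghz) in cards.items():
    tflops = 2 * cores * boost_ghz / 1000  # GFLOPS -> TFLOPS
    print(f"{name}: {tflops:.1f} TFLOPS")

# GTX 1650 lands at ~3.0 TFLOPS vs ~5.0 TFLOPS for the GTX 1660 -- the source
# of the "around 60% of a GTX 1660" comparison later in this article.
```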

Right off the bat, it’s interesting to note that the GTX 1650 is not using a fully-enabled TU117 GPU. Relative to the full chip, the version that’s going into the GTX 1650 has had a TPC fused off, which means the chip loses 2 SMs/64 CUDA cores. The net result is that the GTX 1650 is a very rare case where NVIDIA doesn’t put their best foot forward at launch – the company is essentially sandbagging – a point I’ll loop back around to in a bit.

Within NVIDIA’s historical product stack, it’s somewhat difficult to place the GTX 1650. Officially it’s the successor to the GTX 1050, which was itself a similarly cut-down card. However the GTX 1050 launched at $109, whereas the GTX 1650 launches at $149, a hefty 37% generation-over-generation price increase. Consequently, you could be excused for thinking the GTX 1650 feels a lot more like the GTX 1050 Ti’s successor, as its $149 price tag is very comparable to the GTX 1050 Ti’s $139 launch price. Either way, generation-over-generation, Turing cards have been more expensive than the Pascal cards they replaced, and the low prices of these budget cards really amplify this difference.

Diving into the numbers then, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is not all that big of a step up from the GeForce GTX 1050 series, but Turing’s architectural changes and increased graphics efficiency mean that the little card should pack a bit more of a punch than its raw specifications suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, however, with the reference-clocked GTX 1650 boosting to just 1665MHz.

Rounding out the package are 32 ROPs, which are part of the card’s 4 ROP/L2/Memory clusters. This means the card is being fed by a 128-bit memory bus, which NVIDIA has paired up with GDDR5 memory clocked at 8Gbps. Conveniently enough, this gives the card 128GB/sec of memory bandwidth, about 14% more than the last-generation GTX 1050 series cards got. Thankfully, while NVIDIA hasn’t done much to boost memory capacities on the other Turing cards, the same is not true for the GTX 1650: the minimum here is now 4GB, instead of the very constrained 2GB found on the GTX 1050. 4GB is not particularly spacious in 2019, but the card shouldn’t be quite so desperate for memory as its predecessor was.
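
The bandwidth math here is simple enough to verify directly (a minimal sketch; both configurations come straight from the spec table):

```python
# Memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

gtx_1650 = bandwidth_gb_s(8, 128)  # 8Gbps GDDR5 on a 128-bit bus
gtx_1050 = bandwidth_gb_s(7, 128)  # 7Gbps GDDR5 on a 128-bit bus
print(f"GTX 1650: {gtx_1650:.0f} GB/s")   # 128 GB/s
print(f"GTX 1050: {gtx_1050:.0f} GB/s")   # 112 GB/s
print(f"Uplift:   {gtx_1650 / gtx_1050 - 1:.0%}")  # ~14%
```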

Overall, on paper the GTX 1650 is set to deliver around 60% of the performance of the next card up in NVIDIA’s product stack, the GTX 1660. In practice I expect the two to be a little closer than that – GPU performance scaling isn’t quite 1-to-1 – but that is the ballpark area we’re looking at right now until we can actually test the card.

Meanwhile when it comes to power consumption, the smallest member of the GeForce Turing stack is also the lowest power. NVIDIA has held their GTX xx50 cards at 75W (or less) for a few generations now, and the GTX 1650 continues this trend. Which means that, at least for cards operating at NVIDIA’s reference clocks, an additional PCIe power connector is not necessary and the card can be powered solely off of the PCIe bus. This satisfies the need for a card that can be put in basic systems where a PCIe power cable isn’t available, or in low-power systems where a more power-hungry card isn’t appropriate. This also means that while discrete video cards aren’t quite as popular as they once were for HTPCs, for HTPC builders who are looking to go that route, the GTX 1650 is going to be the GTX 1050 series’ replacement in that market as well.

Reviews, Product Positioning, & The Competition

Shifting gears to business matters, let’s talk about product positioning and hardware availability.

The GeForce GTX 1650 is a hard launch for NVIDIA, meaning that cards are shipping from retailers and in OEM systems starting today. As is typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of, so NVIDIA’s board partners will be doing their own thing with their respective product lines. Notably, these will include factory overclocked cards that offer more performance, but which will also require an external PCIe power connector to meet their greater energy needs.

Despite this being a hard launch however, in a very unorthodox (if not outright underhanded) move, NVIDIA has opted not to allow the press to test GTX 1650 cards ahead of time. Specifically, NVIDIA has withheld the driver necessary to test the card, which means that even if we had been able to secure a card in advance, we wouldn’t have been able to run it. We do have cards on the way and we’ll be putting together a review in due time, but for the moment we have no more hands-on experience with GTX 1650 cards than you, our readers, do.

NVIDIA has always treated low-end card launches as a lower-key affair than their high-end wares, and the GTX 1650 is no different. In fact this generation’s launch is particularly low-key: we have no pictures or even a press deck to work with, as NVIDIA opted to inform us of the card over email. And while there’s little need for extensive fanfare at this point – it’s a Turing card, and the Turing architecture/feature set has been covered to excess by now – it’s rare for a card based on a new GPU to launch without reviewers getting an early crack at it. And that’s for a good reason: reviewers offer neutral, third-party analysis of the card and its performance. So it’s generally not in buyers’ best interests for reviewers to be cut out – and when they are, it can raise some red flags – but nonetheless here we are.

At any rate, while I’d suggest that buyers hold off for a week or so for reviews to be put together, Turing at this point is admittedly a known quantity. As mentioned earlier, the on-paper specifications put the GTX 1650 at around 60% of the GTX 1660’s performance, and real-world performance will probably be a bit higher. NVIDIA for their part is primarily pitching the card as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts – the same upgrade cadence we’ve seen pitched throughout the rest of the GeForce Turing family. NVIDIA says performance should be 2x (or more) that of the GTX 950, which should be easily achieved.

While we’re waiting to get our hands on a card to run benchmarks, broadly speaking the GTX xx50 series of cards are meant to be 1080p-with-compromises cards, and I’m expecting much the same for the GTX 1650 based on what we saw with the GTX 1660. The GTX 1650 should be able to run some games at 1080p at maximum image quality – think DOTA2 and the like – but in more demanding games I expect it to have to drop back on some settings to stay at 1080p with playable framerates. One advantage that it does have here, however, is that with its 4GB of VRAM, it shouldn’t struggle nearly as much on more recent games as the 2GB GTX 950 and GTX 1050 do.

Strangely enough, NVIDIA is also offering a game bundle (of sorts) with the GTX 1650. Or rather, the company has extended their ongoing Fortnite bundle to cover the new card, along with the rest of the GeForce GTX 16 lineup. The bundle itself isn’t much to write home about – some game currency and skins for a game that’s free to begin with – but it’s an unexpected move since NVIDIA wasn’t offering this bundle on the other GTX 16 series cards when they launched.

Meanwhile, looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to. And while the actual CUDA core count differences between the GTX 1650 and a theoretical GTX 1650 Ti are quite limited – to the point where a few more CUDA cores alone would probably not be worth it – NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill the current gap in NVIDIA’s product stack.
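
To put a rough number on that ace, here is the bandwidth headroom GDDR6 could offer such a card. This is purely illustrative: the 12Gbps data rate is an assumption borrowed from the GTX 1660 Ti’s memory, and NVIDIA has announced no such product.

```python
# Hypothetical: the GTX 1650's 128-bit bus fitted with GDDR6 instead of GDDR5.
# The 12Gbps GDDR6 figure is an assumption (it's what the GTX 1660 Ti ships
# with); NVIDIA has not announced a GTX 1650 Ti.
gddr5_gb_s = 8 * 128 / 8    # today's GTX 1650: 128 GB/s
gddr6_gb_s = 12 * 128 / 8   # hypothetical GDDR6 configuration: 192 GB/s
print(f"{gddr6_gb_s:.0f} GB/s vs {gddr5_gb_s:.0f} GB/s "
      f"(+{gddr6_gb_s / gddr5_gb_s - 1:.0%} bandwidth)")
```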

Finally, as for the competition, AMD is of course riding out the tail-end of the Polaris-based Radeon RX 500 series, so this is what the GTX 1650 will be up against. AMD is trying very hard to set up the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting battle. Based on what we saw with the GTX 1660, the RX 570 should perform rather well against the GTX 1650, and the 8GB of VRAM would be the icing on the cake. However I’m not sure AMD and its partners can necessarily hold 8GB card prices to $149 or less, in which case the competition may end up being the 4GB RX 570 instead.

Ultimately AMD’s position is going to be that while they can’t match the GTX 1650 on features or power efficiency – and bear in mind that the RX 570 is rated to draw almost twice as much power – they can match it on pricing and beat it on performance. As long as AMD wants to hold the line there, this is going to be a favorable matchup for AMD on a pure price/performance basis in current-generation games. To see just how favorable, we’ll of course need to benchmark the GTX 1650, so be sure to stay tuned for that.

Q2 2019 GPU Pricing Comparison
AMD                    Price   NVIDIA
-                      $349    GeForce RTX 2060
Radeon RX Vega 56      $279    GeForce GTX 1660 Ti
Radeon RX 590          $219    GeForce GTX 1660
Radeon RX 580 (8GB)    $189    GeForce GTX 1060 3GB (1152 cores)
Radeon RX 570          $149    GeForce GTX 1650



https://www.anandtech.com/show/14255/nvidia-launches-geforce-gtx-1650-budget-turing-for-149-available-today

2019-04-23 14:00:00Z

Intel 9th Gen Core Processors: All the Desktop and Mobile 45W CPUs Announced - AnandTech

Intel’s launch of its ninth generation processors has been an odd and awkward affair. Despite launching the 8-core Core i9-9900K last year, the company had yet to announce the rest of the lower-end processors in the family, even as several of them leaked in the meantime. That all changes today, as the company is giving out a full list of processors, with availability soon to follow. There are still question marks over Intel’s ability to meet increased demand, so it will be interesting to see if Intel can supply the lower-frequency, lower-core-count hardware in volume.

Today’s launch comes in two parts: Desktop and Mobile. Desktop is on this page, Mobile is on the next page.

Intel 9th Generation Core Desktop Processors: 34 CPUs

Dubbed ‘Coffee Lake Refresh’, the 9th generation of Intel’s Core CPU product line is a direct refresh of its 8th generation Coffee Lake hardware, with minor enhancements such as a better thermal interface on the high-end processors, support for up to 8 cores, and newer chipsets with integrated USB 3.1 Gen2 (10Gbps) and CNVi-enabled Wi-Fi. The hardware is still fundamentally the original 6th Gen Skylake microarchitecture underneath, from 2015, but built on Intel’s latest 14nm process variant in order to extract additional frequency and efficiency, and with more cores at the high end.

Intel sub-divides its CPUs in two ways. First, by the Core i-series number:

  • Core i9: Eight cores, 9900 with HT, 9700 without HT
  • Core i7: Eight cores, no HT
  • Core i5: Six Cores, no HT
  • Core i3: Four Cores, no HT
  • Pentium Gold: Two Cores, HT
  • Celeron: Two Cores, no HT

Then, each processor may have an additional suffix related to certain features that are enabled or disabled:

  • K = Overclockable
  • KF = Overclockable with No Integrated Graphics
  • No Suffix = Standard CPU, 54-65W TDP, Integrated Graphics
  • F = No Integrated Graphics
  • T = Low Power, 35W TDP

The idea here is that the name of the processor should tell you all that you need to know about what the processor has available. Aside from the odd difference in the Core i9 section, it’s mostly all there.

New for the 9th generation CPUs is the F suffix, meaning no integrated graphics. We’ve commented on these parts before, but ultimately it would appear that Intel’s ability to yield cores is better than its ability to yield graphics at the correct frequencies, so in order to maximize $ per square millimeter, it is now offering graphics-free versions of its popular CPUs. These parts are priced the same with or without graphics, which just goes to show how much Intel values its current graphics implementation. As always, it will be interesting to see how Intel’s yields split between F versions and regular processors.

Regarding the normal processors and the lower-power 35W TDP ‘T’ processors, the main variation here is in the base frequency. It should be noted that Intel’s TDP ratings are only valid at the base frequency of the processor, so even if the CPU has a high turbo, its peak power consumption during turbo may be a lot higher than the TDP value (Intel defines a Power Limit 2 value which is often 25% higher, but motherboard manufacturers usually ignore this). This behavior is specific to Intel’s CPUs, exacerbated by motherboard manufacturers going beyond Intel’s specifications, and we’ve detailed the issue in previous articles. Click the following link to find out more:

https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo
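
As a very rough illustration of the PL1/PL2 distinction described above – a much-simplified sketch, assuming the ~25% figure quoted in the previous paragraph; real turbo behavior also involves a time window (tau) and the board-vendor overrides the linked article details:

```python
# Simplified model of Intel's power limits: PL1 equals the rated TDP and
# governs sustained load; PL2 governs short turbo bursts and is often ~25%
# higher (and motherboard vendors frequently raise or ignore it entirely).
def allowed_package_power(tdp_w: float, in_turbo_window: bool) -> float:
    pl1 = tdp_w
    pl2 = tdp_w * 1.25
    return pl2 if in_turbo_window else pl1

print(allowed_package_power(95, in_turbo_window=True))   # 118.75 W burst
print(allowed_package_power(95, in_turbo_window=False))  # 95.0 W sustained
```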

Each CPU has a qualified memory support of DDR4-2666 for Core i5 and above, or DDR4-2400 for Core i3 and below. This means that while processors may support higher, Intel does not make any assurances as to whether it will work. All the processors are aligned with Intel’s Optane H10 storage solution, which was announced yesterday. Support with H10 is almost arbitrary, as H10 has to work with other CPUs as well.

We’re going to go through each of the sub-divisions, hopefully making the naming and numbering clearer.

Intel 9th Generation Core CPUs
Core i9
AnandTech    Cores/Threads   Base Freq   Turbo Freq     L3 Cache   DDR4   IGP   TDP    Price (1ku)
i9-9900K     8C / 16T        3.6 GHz     5.0 GHz        16 MB      2666   Y     95 W   $488
i9-9900KF    8C / 16T        3.6 GHz     5.0 GHz        16 MB      2666   -     95 W   $488
i9-9900      8C / 16T        3.1 GHz     4.9/5.0 GHz*   16 MB      2666   Y     65 W   $439
i9-9900T     8C / 16T        2.1 GHz     4.4 GHz        16 MB      2666   Y     35 W   $439
i9-9700K     8C / 8T         3.6 GHz     4.9 GHz        12 MB      2666   Y     95 W   $374
i9-9700KF    8C / 8T         3.6 GHz     4.9 GHz        12 MB      2666   -     95 W   $374
* i9-9900 supports Intel Thermal Velocity Boost for +100 MHz Turbo

There are two main numbers in the Core i9 section, the 9900 family with HyperThreading, and the 9700 family without. Both the i9-9900K and i9-9700K have corresponding F versions without integrated graphics, as well as 35W ‘T’ versions which have much lower base frequencies.

Interestingly, Intel’s official documentation lists the Core i9-9900 as a 4.9 GHz processor, or 5.0 GHz when ‘Intel Thermal Velocity Boost’ is enabled and valid. If you’re wondering what Intel Thermal Velocity Boost is, so were we – Intel has never specifically mentioned it in any previous meeting or briefing, and it suddenly appears in a processor list slide. The slide actually lists the turbo as 5.0 GHz*, with the asterisk leading to a footnote which clarifies that it is 5.0 GHz when ITVB is enabled. It’s very sneaky how they’ve done that, and easy to miss if you are just skimming the spec sheet. Also doubling down on the awkwardness, the Core i9-9900 is the only processor in the whole stack that has this feature. Why just this one? I can guess the PR answer, but the real answer? Is Intel just trialling a feature? How is this feature going to be interpreted by the motherboard manufacturers? Are they going to butcher this one as well? Intel just opened a can of very specific worms that it pulled from a box it didn’t tell us about.
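
In effect, the footnote describes the following behavior – a sketch of our reading of the spec sheet, since Intel hasn’t documented the exact trigger conditions:

```python
# Intel Thermal Velocity Boost, as the footnote describes it: +100 MHz on top
# of the standard turbo when (unspecified) thermal conditions are met.
def i9_9900_turbo_ghz(tvb_active: bool) -> float:
    standard_turbo = 4.9  # GHz, per Intel's official documentation
    return standard_turbo + (0.1 if tvb_active else 0.0)

print(f"{i9_9900_turbo_ghz(True):.1f} GHz")   # 5.0 GHz, the marketed number
print(f"{i9_9900_turbo_ghz(False):.1f} GHz")  # 4.9 GHz otherwise
```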

Intel 9th Generation Core CPUs
Core i7
AnandTech    Cores/Threads   Base Freq   Turbo Freq   L3 Cache   DDR4   IGP   TDP    Price (1ku)
i7-9700      8C / 8T         3.0 GHz     4.7 GHz      12 MB      2666   Y     65 W   $323
i7-9700F     8C / 8T         3.0 GHz     4.7 GHz      12 MB      2666   -     65 W   $323
i7-9700T     8C / 8T         2.0 GHz     4.3 GHz      12 MB      2666   Y     35 W   $323

Moving on to the Core i7 parts, and we immediately have a problem. Here Intel has listed three Core i7-9700 CPUs. But wait, didn’t we have i9-9700 parts in the Core i9 family? Yes, we did. Intel has decided (or rather, someone at Intel wants to confuse everyone) that the 9700K processors should be Core i9, while the non-K parts should be Core i7. The differences between the i9 and i7 versions are just frequency and TDP – we still have 8 cores without HT – and while I can fully understand the desire to have a distinct separation between the two, this is ultimately a shot at both obfuscation and confusion for partners, OEMs, users, and followers of Intel’s product line. Perhaps making matters even worse, we have Core i9-K, Core i5-K, and Core i3-K parts, but no Core i7-K. It makes me wonder if this naming and segmentation strategy was ever passed through a focus group beyond subsets of engineers or marketing.

Intel 9th Generation Core CPUs
Core i5
AnandTech    Cores/Threads   Base Freq   Turbo Freq   L3 Cache   DDR4   IGP   TDP    Price (1ku)
i5-9600K     6C / 6T         3.7 GHz     4.6 GHz      9 MB       2666   Y     65 W   $262
i5-9600KF    6C / 6T         3.7 GHz     4.6 GHz      9 MB       2666   -     65 W   $262
i5-9600      6C / 6T         3.1 GHz     4.6 GHz      9 MB       2666   Y     65 W   $213
i5-9600T     6C / 6T         2.3 GHz     3.9 GHz      9 MB       2666   Y     35 W   $213
i5-9500      6C / 6T         3.0 GHz     4.4 GHz      9 MB       2666   Y     65 W   $192
i5-9500F     6C / 6T         3.0 GHz     4.4 GHz      9 MB       2666   -     65 W   $192
i5-9500T     6C / 6T         2.2 GHz     3.7 GHz      9 MB       2666   Y     35 W   $192
i5-9400      6C / 6T         2.9 GHz     4.1 GHz      9 MB       2666   Y     65 W   $182
i5-9400F     6C / 6T         2.9 GHz     4.1 GHz      9 MB       2666   -     65 W   $182
i5-9400T     6C / 6T         1.8 GHz     3.4 GHz      9 MB       2666   Y     35 W   $182

The Core i5 range is relatively traditional, featuring the 9600, 9500, and 9400 parts with some variants. The 9600 gets a K, a KF, and a T, whereas the 9500 and 9400 get an F and a T only. Interestingly, the K parts here are the only overclockable members of the stack with a TDP of 65W, compared to 91W or 95W elsewhere. These parts offer an increased base frequency (3.7 GHz vs 3.1 GHz), although they carry a tray price (1k-unit purchase) $49 higher than the non-overclockable parts.

Intel 9th Generation Core CPUs
Core i3
AnandTech    Cores/Threads   Base Freq   Turbo Freq   L3 Cache   DDR4   IGP   TDP    Price (1ku)
i3-9350KF    4C / 4T         4.0 GHz     4.6 GHz      8 MB       2400   -     91 W   $173
i3-9320      4C / 4T         3.7 GHz     4.4 GHz      8 MB       2400   Y     62 W   $154
i3-9300      4C / 4T         3.7 GHz     4.3 GHz      8 MB       2400   Y     62 W   $143
i3-9300T     4C / 4T         3.2 GHz     3.8 GHz      8 MB       2400   Y     35 W   $143
i3-9100      4C / 4T         3.6 GHz     4.2 GHz      6 MB       2400   Y     65 W   $122
i3-9100F     4C / 4T         3.6 GHz     4.2 GHz      6 MB       2400   -     65 W   $122
i3-9100T     4C / 4T         3.1 GHz     3.7 GHz      6 MB       2400   Y     35 W   $122

The Core i3 also follows its traditional scheme, with the 9350, 9320, 9300, and 9100 parts, the last of which has slightly less L3 cache per core and is priced accordingly. The 9350 is available as a KF, but there is no standard SKU: instead, users can have the 9320. Only the 9300 and 9100 get low-power T versions, and pricing within the Core i3 line is stable compared to the previous generation. It should be noted that the Core i3 parts (and below) only have qualified support up to DDR4-2400, rather than the DDR4-2666 supported by the Core i5/i7/i9 processors.

I should point out that Intel is still not offering a quad-core for under $100 to compete with AMD’s Ryzen 3 2200G. The APU from AMD has four full Zen cores along with Vega graphics, demolishing Intel’s offering in any graphics workload, and it comes bundled with a good 65W cooler, whereas it’s sometimes a question mark whether Intel’s CPUs come with a cooler at all (in order to meet tray pricing, probably not). Intel’s cheapest quad-core is the i3-9100, which is likely to offer better single-threaded performance, but will be 30% more expensive at retail – if you can find one, that is. There are 2200G parts available almost everywhere.

Intel 9th Generation Core CPUs
Pentium Gold and Celeron
AnandTech    Cores/Threads   Base Freq   Turbo Freq   L3 Cache   DDR4   IGP   TDP          Price (1ku)
G5620        2C / 4T         4.0 GHz     -            4 MB       2400   Y     54 W         $86
G5600T       2C / 4T         3.3 GHz     -            4 MB       2400   Y     35 W         $75
G5420        2C / 4T         3.8 GHz     -            4 MB       2400   Y     54 W/58 W*   $64
G5420T       2C / 4T         3.2 GHz     -            4 MB       2400   Y     35 W         $64
G4950        2C / 2T         3.3 GHz     -            2 MB       2400   Y     54 W         $52
G4930        2C / 2T         3.2 GHz     -            2 MB       2400   Y     54 W         $42
G4930T       2C / 2T         3.0 GHz     -            2 MB       2400   Y     35 W         $42
* G5420 can be derived from a dual-core die (54 W) or a quad-core die (58 W), see below

The Pentium Gold and Celeron parts bring up the cheaper end of the spectrum, from $42 up to $86. They are all dual cores, with the Pentium Gold CPUs supporting HyperThreading. The Celeron parts also have the smallest amount of L3 cache per core, with only 1 MB. The odd CPU of the bunch is the Pentium Gold G5420, which comes in 54W and 58W variants. Intel has done this before: one version of this CPU is derived from a dual-core die (54W), while the other is a cut-down quad-core variant (58W). In the past these two different models have had different part numbers, so users might be able to track which one they get. If there isn’t a part number listed by the retailer, then it’s pot luck based on Intel’s binning and what they have in stock.

For these processors, users will have to pair them with a 300-series chipset. There is no new 300-series chipset launching today, so users can rely on the Z390/Z370/Q370/B360/H370/H310 models already on the market. Depending on the model chosen, boards will have a number of PCIe lanes, SATA ports, and USB ports available, and potentially some integrated Wi-Fi as well. It is up to the board manufacturers to support these features, or to use corresponding controllers. It should be noted that with a firmware upgrade for the newest processors, most motherboards should start supporting Samsung’s new 32 GB memory modules, allowing for a total of 128 GB of memory on these CPUs (two DIMMs per channel, two channels).
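
The 128 GB ceiling follows directly from the platform’s memory topology:

```python
# Two memory channels, two DIMMs per channel, 32 GB per (new Samsung) DIMM.
channels, dimms_per_channel, gb_per_dimm = 2, 2, 32
print(channels * dimms_per_channel * gb_per_dimm, "GB")  # 128 GB
```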

Intel hasn’t reached out to us about reviewing any of these new processors, so if you have any thoughts of what parts you want to see tested, please let us know.

Over the page is our coverage of Intel's new Mobile processors, up to 5.0 GHz*.

The 45W range of processors from Intel fits into the high-performance/prosumer niche of portable gaming laptops and workstations. These typically populate 15.6-inch and 17.3-inch devices, going from a basic gaming system with a discrete graphics card all the way up to DTR, or DeskTop Replacement, hardware that takes the place of a full-on desktop in a (insert non-committal gesture) mobile sort of form factor that weighs almost double digits in pounds.

Intel has recently released some mobile processors into the market, such as Whiskey Lake at 15W on 8th Gen, but this is the first proper outing for high performance 9th Gen in a mobile form factor. At this point, we’re not seeing a replacement for Kaby Lake-G, where Intel paired a H-series CPU with a Radeon GPU in the same package, so it will be interesting to see if that gets a refresh later this year.

Intel 9th Generation Core CPUs
Mobile 45W H-Series
AnandTech    Cores/Threads   Base Freq   Turbo Freq   L3 Cache   DDR4   OC    TDP
i9-9980HK    8C / 16T        2.4 GHz     4.9 GHz*     16 MB      2666   Y     45 W
i9-9980H     8C / 16T        2.3 GHz     4.7 GHz*     16 MB      2666   -     45 W
i7-9850H     6C / 12T        2.6 GHz     4.6 GHz      12 MB      2666   ish   45 W
i7-9750H     6C / 12T        2.6 GHz     4.5 GHz      12 MB      2666   -     45 W
i5-9400H     4C / 8T         2.5 GHz     4.3 GHz      8 MB       2666   -     45 W
i5-9300H     4C / 8T         2.4 GHz     4.1 GHz      8 MB       2666   -     45 W
* i9 CPUs support Intel Thermal Velocity Boost for +100 MHz Turbo
('ish' = the i7-9850H supports partial overclocking, discussed below)

Enter the Musclebook: Intel is introducing the new ‘Musclebook’ name for DTR-equivalent devices. Ultimately these are likely to be paired with the high-end Core i9 processors. Intel has two parts here: the 9980HK, which allows for overclocking, and the 9980H. A non-overclockable equivalent of the top part is new to this processor stack, based on requests from Intel’s partners – they wanted something ‘as fast’ as the top HK model, but not actually overclockable. It turns out that if you stick an HK in a system, users expect to be able to push it, and OEMs wanted equivalent performance without having to build in support for overclocking.

Both the 9980HK and 9980H support Intel’s Thermal Velocity Boost, giving an additional 100 MHz if the thermal performance of the hardware allows it. Again, Intel doesn’t specify what those requirements are, or if manufacturers can ignore them, or if it’s enabled by default, etc. It could be somewhat misleading to include those values in the single-core turbo frequencies; however, with mobile platforms we’ve seen such a wide range of PL2 values set in hardware due to the form factor that there is already a wide range of single-core turbo frequencies that don’t match up to the SKU list anyway – this is OEM and design dependent, so there isn’t much fuss from us on this.

There are two Core i7 parts, with six cores and hyperthreading, and the Core i7-9850H supports ‘Partial Overclocking’. In Intel terminology, this means that the CPU can be run up to 400 MHz higher if the OEM sets it as such, allowing the CPU to turbo up to 5.0 GHz. That will be extremely device dependent, and given the way most OEMs deliver their specification sheets, it will be interesting to see if any of them actually list whether this is the case, or just take the 4.6 GHz and not tell anyone.
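
The arithmetic behind that 5.0 GHz figure, for clarity:

```python
# 'Partial Overclocking': the OEM may raise the single-core turbo bin by up
# to 400 MHz over stock, at its discretion.
stock_turbo_ghz = 4.6     # i7-9850H stock single-core turbo
oem_headroom_ghz = 0.4    # Intel's stated partial-overclock ceiling
print(f"Up to {stock_turbo_ghz + oem_headroom_ghz:.1f} GHz")  # 5.0 GHz
```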

The two Core i5 parts bring up the rear, with four cores and hyperthreading. This means Intel still makes quad cores with hyperthreading, even though they have disappeared from the desktop product line.

Given the tight integration of mobile chipsets into the products, expect to see a few new devices enabled with Intel’s new AX200 Wi-Fi 6 card that was launched last week. The mobile chipsets are also listed as supporting Samsung’s new 32 GB memory modules, so we will likely see some high-end ‘Musclebooks’/DTR replacements using those, at extreme cost to the user. Intel is again stating Optane storage support on these devices, as well as TB3 support when additional controllers are included.

With the annual Computex trade show around the corner (last week of May), we’re expecting to see a smorgasbord of devices being offered with the new H-series parts: both refreshes of old models and perhaps some new ones in the mix. Stay tuned for our coverage from the show.



https://www.anandtech.com/show/14256/intel-9th-gen-core-processors-all-the-desktop-and-mobile-45w-cpus-announced

2019-04-23 13:40:24Z

Galaxy Fold delay a setback for Samsung, but it could've been way worse - CNET


The screen on the CNET review unit of the Galaxy Fold hasn't malfunctioned. But that doesn't mean there aren't problems. 

Angela Lang/CNET

Samsung should be taking a victory lap right now for its innovative Galaxy Fold. Reviewers should be singing the praises of the first major foldable smartphone, which was supposed to launch Friday. The only controversy should be whether that $1,980 price is really worth it.

But we don't live in that reality. Instead, Samsung on Monday delayed the launch of the Fold following reports that some of the small number of devices seeded to reviewers began to malfunction or break. The CNET review unit, handled by editor Jessica Dolcourt, hasn't suffered the same issues.


The delay is not just a black eye for Samsung, but for consumer confidence in foldable phones in general. These flexible and bendable devices are supposed to represent a revolution for smartphones, but they can hardly take off if people are worried about their durability. And given the high prices for these devices (just look at the $2,600 Huawei Mate X), you have a right to expect that this phone, well, actually works.

But here's the thing: This whole mishap could have been so much worse.

Samsung should be thanking every reviewer who played with an early unit of the Fold. Just imagine if units got out to the wider public. The outcry would be far greater, as would the criticism that Samsung rushed out a half-baked product just to be "first."

The knee-jerk reaction would be to compare this incident to the Galaxy Note 7 debacle, where Samsung responded slowly to the initial reports of the devices catching fire, only to have it blow up, quite literally, in its face. That Samsung is taking the high-profile, if embarrassing, step of delaying the launch shows it's learned its lesson.

"Samsung delaying the Galaxy Fold shows maturity," Avi Greengart, an analyst at Techsponential, said in a tweet.

Samsung, which has built over 4 billion phones since 1988, was caught by surprise by the Note 7 problems. Like most companies in the mobile industry, Samsung had counted on its battery suppliers to conduct safety tests before putting the batteries in devices. As it turned out, those suppliers didn't catch the errors that caused the Note 7 to overheat.

The Note 7 debacle caused Samsung to be a bit more cautious with its subsequent devices. It packed a smaller battery into 2017's Galaxy S8 and instituted a more rigorous battery testing process. But its new procedures, largely designed to detect battery problems, didn't uncover the issues experienced by the Galaxy Fold's display. 


The inward fold of the Galaxy Fold adds strain to the device. 

Angela Lang/CNET

The screen failures would've been amplified had they reached consumers and not just early reviewers. The foldable phone market is on wobbly, hype-filled legs, and the first impressions so far have been mixed. The Royole Flexpai was an interesting, but buggy product. The Mate X impressed people at MWC 2019, but it wasn't widely available to test. Now, there's this controversy. A few bad products could blow this trend before it has a chance to become a thing.

Ahead of the launch, other industry players noted that the inward fold of the Samsung device adds strain on the display, which is why Huawei and Royole opted for displays that fold outward. Samsung reduced some of the tension on the fold by including a large gap between the two folded sides, as well as adding the screen protector that we only now know absolutely needs to stay on the phone.

Samsung says it's identified a possible early cause. The company said the initial findings from its investigation found potential damage from the impact of the exposed areas of the hinge at the top and bottom of the phone. It also said substances found inside the device might be affecting the display performance.

"We will take measures to strengthen the display protection," Samsung said in a statement. "We will also enhance the guidance on care and use of the display including the protective layer so that our customers get the most out of their Galaxy Fold."

While there aren't any victory laps in Samsung's near future, the company has the opportunity to fix its issues with the Fold early and preserve the prospects for the broader foldable smartphone market -- assuming it's nothing catastrophic that requires a full redesign.

It only took a handful of defective review units. That's a small price to pay. 

Shara Tibken contributed to this story.

The story originally ran on April 22 at 11:28 a.m. PT.



https://www.cnet.com/news/galaxy-fold-delay-a-setback-for-samsung-but-it-couldve-been-way-worse/

2019-04-23 12:00:00Z

Teen’s $1B suit claims Apple’s facial recognition software led to false arrest - Fox News

He’s trying to save face.

A New York man filed a $1 billion lawsuit against Apple, claiming the tech giant’s facial-recognition software wrongly blamed him for stealing from Apple stores.

Ousmane Bah, 18, claims someone used a stolen ID to pass themselves off as him when they were busted stealing $1,200 worth of merchandise from an Apple store in Boston on May 31, 2018, according to papers filed in Manhattan federal court.

The ID listed his name, address and other personal information — but did not include a photo. Bah believes Apple took the perp at his word, and then programmed its security systems to recognize the man’s face as Bah’s.

The thief then ripped off Apple stores in New Jersey, Delaware and Manhattan — incidents Bah was blamed for, the suit claims.

He only learned about the mix-up after receiving a Boston municipal court summons in the mail in June, according to court papers.

The NYPD arrested him on Nov. 29, but a detective working the case viewed surveillance footage from the Manhattan store and concluded that the suspect “looked nothing like” Bah, his lawsuit states.

Charges against Bah have been dropped in every state except New Jersey, where the case is still pending.

Apple’s “use of facial recognition software in its stores to track individuals suspected of theft is the type of Orwellian surveillance that consumers fear, particularly as it can be assumed that the majority of consumers are not aware that their faces are secretly being analyzed,” the lawsuit states.

Apple did not respond to a request for comment.

This story originally appeared in the New York Post.



https://www.foxnews.com/tech/teens-1b-suit-claims-apples-facial-recognition-software-led-to-false-arrest

2019-04-23 12:16:56Z
