| JEMC said: Congrats, Yuri. To all Zen owners, this is of interest: New Ryzen Chipset Driver Patches Security Vulnerabilities. Right now, AMD has two vulnerabilities affecting all Ryzen-based CPUs listed on its website. The first is a speculative code store bypass and floating-point value injection vulnerability, which can cause Ryzen CPUs to leak data from overwritten instructions when the CPU speculates on incorrect floating-point values. The second, called Transient Execution of Non-canonical Accesses, also relates to data leakage: it can allow data to leak when the CPU transiently executes a non-canonical load or store using only the lower 48 address bits. We're not completely sure these are the flaws AMD's new PSP driver has fixed, but either way, it would be best to update your Ryzen chipset drivers to this new version to ensure you aren't left exposed, especially when AMD classifies these security flaws as "critical." You can download the latest driver here. |
Thanks, and thanks for the info. I'm in the mood for downloading stuff, so I'll probably grab this ASAP lol.
| Conina said:
Nice! |
Thanks! And it's unlimited data, so no data cap. On my old internet, which was 150 Mbps down, I averaged 1.3 TB and sometimes hit 1.8-2 TB.
| Bofferbrauer2 said: TechPowerUp checked how much only having 8 lanes slows you down by comparing 8 lanes of PCIe 4.0 with the same number of lanes on PCIe 3.0, 2.0, and even 1.1: https://www.techpowerup.com/review/amd-radeon-rx-6600-xt-pci-express-scaling/ The result is... a whole lot of nothing, really. On average at 1080p, where the drop is largest in most cases, having to resort to PCIe 3.0 instead of 4.0 costs you a whole 2% of performance - and that's both rounded up and with two outliers, Hitman 3 (13.5%) and Death Stranding (6.8%). In most cases, the difference was barely measurable, let alone felt. Going from DDR4-3200 to 3600 probably has a bigger effect on FPS than having just 8 lanes of PCIe, 4.0 or otherwise. In fact, even just using PCIe 2.0 would only result in a 7% performance drop (again, mainly due to the outliers); only with PCIe 1.1 was the difference really felt, with an average performance drop of 17%. By extrapolating the curve, they conclude that PCIe 4.0 x16 would have added a whole 1% to the performance average, so really nothing to lose sleep over. That being said, there's no guarantee it will stay that way. In the future, those link speeds could in fact bottleneck enough to be truly felt, but that probably won't happen for a couple of years.
How long is the Geekbench test? If it's short enough, it could have been performed completely, or almost entirely, under full boost, which is only marginally slower than on the K models. This is also one of the reasons I take Geekbench results with a giant grain of salt - they're pretty far removed from reality half the time. |
Yeah, different reviewers seem to be getting different results with the test. I'm assuming it depends on how they benchmark the games. Hardware Unboxed/TechSpot, for example, saw a 25% difference in performance in Doom Eternal and up to 5% faster on average across the rest of the games when switching from Gen 3 to Gen 4. But yeah, not every game/scenario will see a big difference.
I don't really run Geekbench myself, so I'm not too sure, but I think it's fairly short.
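For anyone curious about the raw numbers behind that lane-scaling discussion, here's a rough back-of-the-envelope sketch of theoretical one-way PCIe bandwidth by generation. The per-lane figures are the approximate usable rates after line-encoding overhead (8b/10b for 1.1/2.0, 128b/130b for 3.0/4.0); the helper function is just for illustration, not from any of the linked reviews:

```python
# Approximate usable per-lane bandwidth in GB/s, after encoding overhead:
# PCIe 1.1: 2.5 GT/s with 8b/10b  -> ~0.25 GB/s per lane
# PCIe 2.0: 5.0 GT/s with 8b/10b  -> ~0.5 GB/s per lane
# PCIe 3.0: 8.0 GT/s with 128b/130b -> ~0.985 GB/s per lane
# PCIe 4.0: 16 GT/s with 128b/130b  -> ~1.969 GB/s per lane
PER_LANE_GBPS = {"1.1": 0.25, "2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-way bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# The x8 configurations TechPowerUp tested on the RX 6600 XT:
for gen in ("1.1", "2.0", "3.0", "4.0"):
    print(f"PCIe {gen} x8: {link_bandwidth(gen, 8):5.1f} GB/s")
```

The takeaway: each generation doubles the rate, so an x8 PCIe 4.0 link (~15.8 GB/s) offers the same theoretical bandwidth as x16 PCIe 3.0, which is part of why dropping the x8 card back to Gen 3 costs so little.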
PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850