It is a fair amount quieter than a window unit air conditioner simply because it doesn't have a powerful fan blowing air into the room. It only has a small fan on the condenser. It's like a little refrigerator or air conditioner, so it puts out heat. The ideal scenario (unless you like the extra heat) would be to have a way of ducting the exhaust out a window, or to have the chiller in a closet that can be pumped full of hot air without causing a problem.
-
Robbo99999 Notebook Prophet
Strange, it's just the Combined Score that's significantly worse on the latest Windows, whereas it's better than Windows 7 in the individual Physics & Graphics tests. That's a strange anomaly. -
Robbo99999 Notebook Prophet
Wow, really nice high clocks, but high voltages too. What are you using to cool that again, delidded and all that malarkey, right? -
After version 1809, Windows 10's degradation of CPU performance and memory performance escalated. From 1903 forward, everything relating to CPU performance and benchmark scores has diminished, thanks to the incompetence of the Redmond Retards. Disabling vulnerability mitigations doesn't change anything; the performance still just sucks in comparison. The latest (2004) is further diminished from 1903 and 1909. You will see that disappointing trend with benchmarks like Cinebench and wPrime, as well as the 3DMark combined tests. There is literally NOTHING to look forward to from Micro$lop any more. These bastards are brain-dead imbeciles. They're literally turning Windows PCs into XBOX zombie-box feces. All they seem capable of producing at this point is broken trash. It wouldn't hurt my feelings any at this point to see them go bankrupt and shut down. We could all switch to Linux and that would become the new normal.
Windows 10: The beginning of the end for Control Panel
Are Windows 10 Control Panel's days numbered?
Seems that everything we thought we knew before has changed with Comet Lake, and 1.500V is no longer considered "high voltage" by Intel. I RMA'd my Aorus Master motherboard thinking something was wrong with it. The ASUS board is the same, and after doing more research, this is the new normal.
If you look at an ASUS BIOS, the voltage values are color-coded. White is the basic/stock range, yellow is elevated, pink is more elevated, and red is when you begin to enter the higher-risk zone. If voltage monitoring is enabled, the system will not boot when the voltage is set to something in the danger zone. It will halt with a voltage protection error, and you have to press F1 to enter the BIOS and lower the voltage, or turn off voltage monitoring. That no longer happens at 1.500V like it used to. It is now 1.700V.
And, if you look at the legend at the bottom of the screen, 1.700V is the max of the "normal" (non-LN2) voltage range. It seems very strange to me, but I have stopped questioning it, even though it still seems weird.
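Purely as an illustration of the behavior described above, a minimal sketch. Only the 1.700V halt threshold comes from the post; the white/yellow/pink cutoffs are assumed, since ASUS doesn't publish exact values:
```python
# Sketch of the BIOS vcore color zones described above. Only the 1.700V
# halt threshold comes from the post; the other boundaries are assumed.
WHITE_MAX = 1.350   # assumed stock-ish range
YELLOW_MAX = 1.450  # assumed elevated
PINK_MAX = 1.550    # assumed more elevated
HALT_AT = 1.700     # per the post: boot halts here unless monitoring is off

def vcore_zone(volts: float, monitoring: bool = True) -> str:
    """Classify a vcore setting the way the BIOS color coding does."""
    if monitoring and volts >= HALT_AT:
        return "halt: voltage protection error (press F1 or disable monitoring)"
    if volts <= WHITE_MAX:
        return "white (stock range)"
    if volts <= YELLOW_MAX:
        return "yellow (elevated)"
    if volts <= PINK_MAX:
        return "pink (more elevated)"
    return "red (higher risk)"

print(vcore_zone(1.500))  # just a warning color now; older boards would halt here
print(vcore_zone(1.750))  # still refuses to boot with monitoring enabled
```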
-
Robbo99999 Notebook Prophet
Wow, so has the new range of higher safe voltages been widely acknowledged across the internet by reviewers & overclockers? -
I wouldn't say "widely," but people are beginning to realize it. Initial reactions across the board were similar to mine. Running stock, voltage is what we're used to seeing. They only start to seem scary as the core clocks rise. Some people seem upset by it and others are just accepting it and moving on. And, oddly enough, temperatures are not insanely high in my opinion, considering the added voltage. If you had set voltages that high on prior-generation CPUs, they would have been on the verge of overheating with no load.
And, the voltage required to run 5.5GHz now isn't really much different than what we would have expected to need to run 5.5GHz on 8th or 9th Generation CPUs. So, that part isn't that much different, only the danger threshold. -
You could look at window seals for portable ACs and rig an insulated hose, like those used for dryers and portable ACs, to it. It would take some rigging to connect it to the chiller, but the hose to the window can be done fairly easily.
-
Yes, the hardest part would be to fabricate a plenum to fit the grille in the back and funnel the exhaust down to a flexible hose size, just like my portable AC unit. The biggest drawback to doing that is the same drawback to using the portable AC unit: it takes up extra space and is very inconvenient unless you have a huge room with a lot of extra space. The hose adds bulk and keeps you from placing it close to the wall near a window. Another good option (probably easier than fabricating something with the flexible hose) would be to have it on a table directly in front of a window and simply make a panel that fits in the window opening, with a square box/plenum to blow the air straight out of the back of the chiller through the opening in the window panel. That would likely be more efficient for the chiller since it would be less restrictive than using a flexible hose.
I just tolerate the exhaust heat and put a fan near the back of the chiller to suck the hot air away from the back of it and blow it toward the door to my office to keep it from building up around my desk seating area. I don't use it all of the time and I don't do as much benching as I used to, so putting up with the hot air once in a while when I do isn't a big deal.
I think the underlying story behind this is that it is the only way Intel can even pretend to keep up with Ryzen performance. The only solution available to them is to pour on the voltage and crank up the clock ratio. I still prefer Intel because I love overclocking and Ryzen just doesn't do well at all with CPU or memory overclocking, but it is kind of pathetic on Intel's part in terms of practicality. If you could overclock Ryzen CPUs like this it would be truly amazing. If it is safe now (and I believe it is), it makes me wonder why it wasn't safe before. I don't know what they could have done to change that part of the equation. Maybe what was considered safe max voltage before was overly conservative. Or, maybe now they are throwing caution to the wind. Hard to say. -
So, my last motherboard was total garbage. And I'd been wanting to run the AIDA64 CPU benchmarks for some time. But, they'd always instantly power off my PC.
And so now that I have an X299 Dark, I know why it was powering down. The power draw is crazy, at only 4.5GHz 1.175V with no AVX2/3 offsets at all.
-
That, along with raising the max CPU temp threshold (99°C or 100°C up to 115°C).
-
You also have a good CPU sample, and that definitely seems to be a major contributor. It doesn't seem I can give mine enough voltage to bench it stable past 5.5GHz. I have gone up to 1.700V with the chiller getting the CPU core temps down to near zero, and 5.6GHz is bootable and can do a CPU-Z bench/validation. That's it. If I try running a 3DMark test, wPrime, Cinebench, or just about anything else, I get a BSOD with WHEA or IRQL_NOT_LESS_OR_EQUAL errors. 5.7GHz is not bootable, even into the BIOS. I have to clear the CMOS. Otherwise, just a black screen and no boot.
That definitely helps. I haven't hit 90°C, but I think the more headroom there is below the max temp threshold, the better the CPU behaves. -
Your temps are near zero; mine are -50 at the die, and maybe -11 to -30 in Windows. Hard to tell, as it only shows -11 max.
And this CPU is average, as I could not boot past 5.3GHz before.
Might try to grab another one and see if it's really average or not.
And the higher temps are more for those with everyday clocks and cooling.
Edit:
Your GPU is the bomb! Mine can't get past 2175/2050. -
I keep forgetting about that, LOL. My CPU might do as well if I were getting it that cold. Not sure. Will yours run stable above 5.5GHz with the chiller, or only with the phase change pushing it to sub-zero temps?
I am really pleased with this GPU. It was average before it got shunt modded and gained the ability to lie to the firmware about how much it was consuming in terms of amps and watts. The shunt mod turned this GPU, the 2080 Ti before it, and my old 1080 Ti into real monsters with the right firmware. (The shunt mod alone wasn't enough because the stock firmware was crap, too.) The 2080 Ti I had before I got this FTW3 from Brother @Rage Set was ruined by Micron memory, though. The core overclocked nicely, but the Micron memory absolutely sucked.
I am not sure how much better a 3080 or 3090 would perform compared to this modded FTW3. But, I would like to get my hands on a 3090 and do the same hack job to it after good firmware surfaces.
Edit:
I forgot to ask. Are you using LN2 mode with the phase change? If you are, could that also be having some effect on CPU behavior? I've never enabled LN2 mode before since I don't know what it does exactly, and I have never done sub-zero cooling. I wish you did not live so far away. If you were closer we could drop this CPU into your mobo and see how it does on sub-zero temps. -
Anyone ever have DDR4 just start to crap out on them? I had my memory 100% stable at XMP 3733 17-21-21-41 with around 1.4 volts. It is (2) identical sets of 2x8GB, but one is Samsung B-die and the other is Hynix.
Anyways, so I started my overclock from scratch. CPU OC, check; cache OC, check; then memory OC/XMP, no good... getting BSODs at frequencies that had previously passed memtest perfectly, and anything else for that matter.
I was testing higher voltages a lot with this DDR4, running 1.450 to 1.500 volts, trying to push lower timings with higher frequencies.
But, night before last, my system started to randomly BSOD.
I am convinced I've damaged my memory. I can barely run 3600MHz with higher timings of 17-22-22-42 without getting a BSOD right after signing in to Windows. So, I thought my OS was corrupt; then I went to reinstall Windows LTSC, and it BSOD'd halfway through the installation.
I know this memory so well. And everything else is stable. So, did I damage the memory? Or is it degradation?
So far it is stable at default 2133MHz C15, lol. This memory was perfect. I was so happy to finally get it stable at XMP with a few tweaks... and it only held up for about a week.
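As a rough yardstick for what that fallback costs, first-word latency in nanoseconds is CAS × 2000 / (MT/s). A minimal sketch using the settings mentioned above:
```python
# Rough yardstick: first-word latency (ns) = CAS * 2000 / (MT/s).
def cas_ns(mt_s: int, cl: int) -> float:
    """Absolute CAS latency in nanoseconds for a DDR4 speed/CAS pair."""
    return cl * 2000 / mt_s

for label, mt_s, cl in [("XMP 3733 C17", 3733, 17),
                        ("fallback 3600 C17", 3600, 17),
                        ("JEDEC 2133 C15", 2133, 15)]:
    print(f"{label}: {cas_ns(mt_s, cl):.2f} ns")
# XMP 3733 C17: 9.11 ns
# fallback 3600 C17: 9.44 ns
# JEDEC 2133 C15: 14.07 ns
```
-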
Nah, you didn't damage it. No way. That B-die is good for 5000MHz and 2.100V. I have run my sticks of B-die at 1.600V for many hours without issue. Try using just the B-die in dual channel. Then try just the Hynix in dual channel.
Hynix = unstable junk -
I think it will now. Was on 5.4GHz yesterday getting my RAM stable. I'll check when I stock it back up.
Will the shunt mod work on a Gigabyte Windforce OC 2080 Ti? I sold my Kingpin before GPUs dropped to an all-time low and grabbed a super cheap 2080 Ti to hold me over till the 30 series is in my hands.
Hmmm, this GPU has Micron memory as well, but on cold water it seems to do a bit better. +800 on memory was a struggle on air.
Nope, I turned it on once and noticed my temp readings jumped through the roof in the BIOS. I thought something broke. I put it back and temps dropped back to negative. Not sure what was going on, but one day I'll try to understand what happened.
That is true. -
You may want to pop out your CPU and set it back in place. And do not over-tighten; just tighten it enough to have ample pressure. Then boot into Windows and see what the last bugcheck errors are.
-
Ok, I'll try it. I am getting random BSODs just from using simple things like Microsoft Word. Windows is a fresh install.
The last two in a row were:
"Bad Pool Header"
"Memory Management" -
So, I resolved the issue. Turns out my memory is just garbage.
I passed memtest86 at 3400MHz max. Any more and it fails with numerous errors! Coincidentally, that's the exact same 3400MHz that I managed on my 8086K with this exact same memory set. -
electrosoft Perpetualist Matrixist
3080 up to 30-35% faster (depending on the game) and $400+ cheaper than a 2080 Ti.
Interesting observation that, on average, the 3080 @ 1440p performs like the 2080 Ti @ 1080p.
That's a win all around.
I'm hoping the leaked benchmarks for the AMD 6000 series showing it potentially equals a 2080ti are for the lesser BigNavi (fingers crossed).
Either way, the 3080 nearly doubles my 5700xt in some tests, ouch. -
Yes, it certainly will. I have some spare resistors if you want me to send you a few. You only need two to do the job. They're dirt cheap, but the cost of postage from Digi-Key or Mouser makes them expensive, LOL. I am more than happy to give you a couple of them rather than have you waste money on buying them. That Galax 2000W vBIOS will make that Windforce GPU sing (excluding the memory gimping part). It will help even with the crippled stock firmware, just not as much, because the voltage will still be gimped really badly.
It is a sin and a crime to castrate any top-end model GeForce GPU with crappy Micron or Hynix memory. They should save that garbage RAM for the lower-priced consumer and budget-conscious gamer-boy trash GPUs.
It really is hard to complain about it from a mathematical perspective, other than that the price bar was totally absurd and downright asinine before, and it's less absurd now. But, the numbers are definitely moving in the right direction. That's always better than the alternative. -
Yeah, sorry, but the 30-35% contradicts the 25% average from Gamers Nexus (which has a bit more data there and is very careful not to just eyeball gains).
GN also showed you are going to need power mods to OC the cards because OC performance SUCKS!
Further, I disagree with JayzTwoCents on one thing: which card to compare it to. He mentioned you should compare it to the 2080 and 2080 Super. Problem is, those are 104 dies and the 3080 is a 102 die like the 2080 Ti, making that the proper die to compare it to (as well as the 3090).
But overall, looks good.
Edit: Both GN and Jayz correctly state that this card is 75% or thereabouts faster than a 2080. Also, this is the primary card I recommend from Nvidia for generational uplift, or generally. -
Based on Jay's benchmarks (for whatever that is worth), the Time Spy GPU score is about 1,000 3DMarks higher than my max overclocked score with the shunt-modded 2080 Ti and 2000W vBIOS. Time Spy Extreme is about 600 3DMarks higher. Not sure what the difference is with mine running default clocks, since I never really capture that data.
-
Yeah, GN showed only a couple percent gains on overclocking while sucking down like 30-40 more Watts, and hitting voltage limits.
The last two were not stable and returned lower performance than stock.
Edit: And to be clear, AMD seems to do no better on overclocking, because much above 2.23GHz the logic breaks down and starts returning errors, so it's likely not going to be good for sub-ambient OC, whereas Nvidia will have to have power mods to get there, but CAN get there. -
I disagree with the hype surrounding the comparison of the 3080 with the 2080 Ti. I can see why they are doing it. They don't have anything else available for comparison yet. But, it is supposed to be cheaper... it's not a Ti. I don't know if the 3090 is going to be top dog, or if there is going to be a 3090 Ti, or what. None of that has been disclosed yet. But, most people that hold out for the top dog GPU probably don't really care about that comparison. It's interesting and gives people something to talk about, and some fodder for the media to toss out to the irrationally impulsive gamer-kids. But, I'm not planning to buy a 3080 to replace a 2080 Ti. That would be a waste of money, and I'd ultimately wish I had waited and not spent any money on a second-fiddle GPU. I'm waiting until I know what the direct successor to the 2080 Ti is and going for that. The gap is going to be larger unless the Series 3 Ti successor has something seriously wrong with it.
At least they overclocked it. None of the "normal" GPUs from Pascal forward are impressive at overclocking using ordinary consumer cancer firmware, so I am not surprised to hear that it didn't yield good results. Kind of expecting that to be the case.
I am also interested in watching to find out what kind of firmware the K|INGP|N, Galax HOF XOC and FTW3 top dog GPUs have available and how it responds to a shunt mod. Too many questions need to be answered before I blow money on a new GPU that is going to potentially leave something to be desired. If I can't overclock the crap out of it and see amazing improvement over the stock version of the product, then I'm not interested in spending money on it. Too boring to bother. -
I think that is the right comparison, to a degree. The 3080 is a GA102 die. The 2080 Ti is the TU102 die. Now, the 3080 is MUCH MORE cut down than the 3090, which also uses the GA102 die. In that way, it is similar to the Titan series (as not being a cut-down Ti card). But, they could always go with a cut-down 100 die, like they did for Volta and the V100 for a Titan.
And this is where it gets muddy comparing dies generationally and by line of cards. Do you compare the die size (100, 102, 104, 106, 107, 110, 116, etc.) or do you compare the series number (xx80, xx70, xx80 Ti)? Nvidia has not really ever used the 102 die for the 80 series. They used the 110 die for the 780 and 580. They used the fat 100 for the 480 and 280. But other than that, they have used 104 dies. So using a 102 die now really gives the 80 series something more worthwhile than it has seen in a LONG time. -
https://www.techspot.com/review/2099-geforce-rtx-3080/
This was on an AMD testbed, but the review also discussed Intel vs AMD testing:
-
-
So just put an 8 on top of the 5, right?
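For anyone following along, here is the arithmetic behind that shorthand, as a minimal sketch (assuming a 5 mOhm stock shunt with an 8 mOhm resistor stacked in parallel on top, which is what "an 8 on top of the 5" implies):
```python
# Sketch of the shunt-stacking math: an 8 mOhm resistor soldered in
# parallel on top of a (presumed) 5 mOhm stock shunt. The controller
# still divides the sensed voltage by 5 mOhm, so it under-reports draw.
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

STOCK_MOHM = 5.0   # assumed stock shunt value
ADDED_MOHM = 8.0   # stacked resistor

r_eff = parallel(STOCK_MOHM, ADDED_MOHM)   # ~3.08 mOhm
reported = r_eff / STOCK_MOHM              # card reports ~61.5% of real draw
headroom = 1 / reported                    # ~1.63x the nominal power limit

print(f"effective shunt: {r_eff:.2f} mOhm")
print(f"reported fraction of true power: {reported:.1%}")
print(f"a 320 W limit then really allows ~{320 * headroom:.0f} W")
```
So the card believes it is inside its power limit while actually pulling roughly 1.6x more.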
-
electrosoft Perpetualist Matrixist
I agree they really have no choice but to compare it, but the scary thing is they are comparing what is their mid-high-tier card to the previous top dog, and it is still laying an appreciable smackdown on it stock for stock. I'm looking forward to people shunt modding the 3080's and greater and seeing what they can pull out of them.
Yep, I agree. If I had a 2080 Ti, there is NO WAY I'd buy a 3080. I'd wait, gladly, to either see where Nvidia goes 3080 Ti-wise or just pick up a 3090 and ride it out. No matter the Supers or whatnot last gen, the 2080 Ti remained supreme.
My main takeaway is they really have no choice but to compare what is a higher mid-tier card (not mid-top tier, reserved for Ti models, or top tier = 3090) to the entire previous line, including the previous top-tier 2080 Ti, because quite frankly the logical comparison, 2080 vs 3080, is a complete (and welcome) blowout in the best way.
The performance uplift on like-for-like slotted models generation to generation is astronomical (2080 vs 3080) and something we haven't seen in over 10 generations.
We can also see what a sad, sad disappointment the 20 series was in terms of generational performance uplift, which just makes the 30 series initially shine that much brighter.
We all know once AMD makes their move (or even if they didn't) we know a 3080ti is coming down the pipe. That glaring $999 slot is pretty obvious. We also know Nvidia will saturate their lineups down the road with 3060's, 3050's and Supers as needed to stay ahead.
If I upgrade from my 5700xt, it is running @ ~2070 Super levels, so I would definitely want a 3080 or greater, which means if AMD only brings 2080 Ti performance, eh, it isn't quite worth it for me. On the other hand, if the benchmarks aren't of their top-tier BigNavi and they can manage to bring at least 3080 levels of performance, now we're talking. More would be even better.
-
Robbo99999 Notebook Prophet
Nice, RTX 3080 2.5x - 3.2x the performance of my GTX 1070... a worthy upgrade, but maybe a silly price, £630-£750 roughly here in the UK... I don't need it, I'm gonna hang on & read the reviews for some individual cards to pick the best of the bunch in terms of price/performance/OC'ability... and also to see what happens to the price, although there might be some rationale that the very first launch price may be cheaper than a few weeks down the line (price changing with demand)?
-
Did you miss this post by me?
Seriously, you cannot just look at the prior midrange and high end, because if you go back far enough, you find out that we used to get 100 dies for the 80 series. The FULL FAT die design. And that wasn't some $1200 or $1500 GPU. That was a reasonable price by comparison.
Now, we have gotten 104 dies, dies that used to be closer to the 60 series and were on the 460, for every 80 series since MAXWELL. Only now are we getting a 102 die. Instead, we have been getting what used to be 60 and 70 series cards as 80 series for the past 6 years. AMD had nothing to force a change.
Now, the Maxwell 980 Ti got the 100 die (actually GM200, but there was the failed 800 series and all, which was GM100, so). It was Pascal where the Ti got the 102 die, along with Turing.
The Titan X had GM200 and the Titan X Pascal had a GP102. Titan V used a V100. Titan RTX had a TU102.
So, it isn't as simple as looking at the line of 60, 70, 80, Ti, or Titan series. They shifted the dies used, the levels at which they were sold, etc., throughout the past decades. Instead, you have to balance and compare BOTH the series AND the die replacement to get the full picture of the product and what it may be worth. -
Exactly. If you need to know the RS location numbers on the PCB let me know. I kept photos.
-
The comparison of Turing with Ampere at 270 watts shows that Nvidia sacrificed much of the efficiency for the last bit of performance. We can currently only speculate as to why this was done.
The GeForce RTX 3080 cannot be overclocked well
The OC behavior indicates that the graphics card was pushed to its limit. Nvidia apparently clocks the GeForce RTX 3080 pretty close to the chip's limit from the factory. This is good for the end customer, who gets the best possible performance without intervention. Those who want to maximize the graphics card by hand, on the other hand, will be less enthusiastic, because the GA102 GPU of the GeForce RTX 3080 FE test sample can only be overclocked by 74 MHz, about 4 percent, before it crashes. The graphics card then works at a maximum of 2,025 MHz in games, and 1,965 MHz at higher temperatures. However, those values are only achieved if the power limit, which has been maximized from 320 to 370 watts, allows it, which is rarely the case.
https://www.computerbase.de/2020-09.../5/#abschnitt_overclocking_von_gpu_und_gddr6x
I assume Nvidia with this tried to squeeze out the last drop to stay in front of Big Navi. If this is not enough, they will have to push out the Ti version once AMD is out with the new and shiny.
With 3080/3090 you have to accept Micron memory
Old rumor article but still interesting...
Leaked overclock benchmarks for the upcoming NVIDIA GeForce RTX 3080 Ampere graphics card were recently published. The tests showed memory clocks pushed 9 percent for a mere 2-3 percent performance gain. This raises questions about the RTX 3080’s performance gap with the RTX 3090, which delivers 936 GB/s of memory bandwidth.
https://www.notebookcheck.net/Poor-...ormance-to-the-GeForce-RTX-3090.493458.0.html
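As a quick sanity check on that 936 GB/s figure: peak bandwidth is just the per-pin data rate times the bus width. A minimal sketch, assuming the commonly cited specs of 19.5 Gbps GDDR6X on a 384-bit bus for the 3090 and 19 Gbps on a 320-bit bus for the 3080:
```python
# Peak memory bandwidth = per-pin rate (Gbps) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return gbps_per_pin * bus_width_bits / 8

print(f"RTX 3090: {bandwidth_gb_s(19.5, 384):.0f} GB/s")  # 936 GB/s, matching the article
print(f"RTX 3080: {bandwidth_gb_s(19.0, 320):.0f} GB/s")  # 760 GB/s
```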
Micron Reveals GDDR6X Details: The Future of Memory, or a Proprietary DRAM? tomshardware.com
Micron uncovers history behind GDDR6X development and shares thoughts about the future of DRAM.
-
Before we can say it sucks at overclocking, just like with the 2080 Ti when it came out, one needs to unlock the power limit and see how the card runs at full stock speeds. This has been going on for quite a long time now. The 30 series is nothing new.
-
That's what happens when reviewers that know little or nothing about overclocking grab something off the shelf and expect it to set world records with a couple of mouse clicks.
-
That is fair. At the same time, the question is how far past the knee of the voltage-to-clock efficiency curve they are. The 3090, which is a less cut-down 102 die, should also give us a better idea once they are out (rumors have it pulling over 400W when overclocked). But, you have to fully unlock the power to know.
That is why, in my edit a couple posts back, I made the point that Nvidia's card, although at first glance it looks like it OCs badly, still has potential. AMD, on the other hand, has logic breakdowns much above the speed seen in the PS5 (2.23GHz). If that is true, it means AMD has a hard speed limit because of its architecture, whereas Nvidia just needs the power workarounds to see what their cards can do.
Still, seems like they may have left less on the table compared to what they usually do (although no way to know until the community gets hold of it)... -
That's correct. But average gamers don't have much OC headroom with this new move from Nvidia.
Nvidia this time went the Intel/AMD route... squeeze more out of the chips (less OC headroom). Overclockers will always find their way around Nvidia's shenanigans.
-
The RTX 3080 is a good deal, that's for sure. But, it's actually $500 cheaper than a 2080 Ti FE. And the RTX 3080 is really only around 3% faster than my 2080 Ti was. Most enthusiasts have their 2080 Ti's on water at 2.1GHz, or even 2.145GHz locked in, which is about 25%+ faster than a standard 2080 Ti FE on air cooling.
I'm sure most enthusiasts with 2080 Ti's like this are gonna want the 3090, or AIB 3080's with a 600 watt BIOS on water lol.
I am certain the 3080 will overclock really well once it's cooled down and it can pull an additional 100-150 watts of juice! And it'll still maintain that 30% lead even compared to the 2.1GHz+ 2080 Ti owners out there.
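A back-of-the-envelope check of those numbers, taking the ~30% stock-vs-stock uplift from the reviews and the ~25% gain of a 2.1GHz watercooled 2080 Ti over the FE baseline, both quoted in the posts above:
```python
# Stock 3080 ~1.30x a stock 2080 Ti FE; a 2.1 GHz watercooled 2080 Ti ~1.25x
# the same FE baseline. How do the two compare head to head?
stock_3080 = 1.30
oc_2080ti = 1.25

lead = stock_3080 / oc_2080ti - 1
print(f"stock 3080 vs OC'd 2080 Ti: {lead:.1%}")  # ~4%, close to the ~3% claimed above
```
-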
I think Ampere will be great once it's watercooled and more power is available to it via an AIB BIOS or modification to the PCB. -
Same! This tiny part was literally the saving grace for the crappy NON-A 2080Ti! Maybe we could use them with Ampere GPU’s too.
Look how fast that thing was over stock performance. Luckily it had Samsung memory.
This non-A 2080 Ti, pulled from an Aorus eGPU Gamebox, ran maybe a 13,200 graphics score in Time Spy, an actual turd running stock at a 250 watt limit. Another 4,000 points is like a generational leap and a higher-tier graphics card on top.
-
electrosoft Perpetualist Matrixist
Well, you always compare stock to stock, and then once the air clears OC-wise, you throw OC vs OC in there. Comparing a just-released 3080 which hasn't been touched by real OC'ers vs. the 2080 Ti that has been through the wringer for ~2 yrs is never a fair comparison.
Like you surmised, in the end, it will most likely still maintain that ~30% lead over the 2080ti.
$500 cheaper is even more butter on the biscuit.
-
You will have to play with it a little bit because the behavior is dramatically altered due to the shunt mod telling lies to the vBIOS. Also, DDU and clean install the drivers if you didn't already.
Are you using a 2000W vBIOS (Kingpin or Galax XOC)? If so, try locking the voltage at 1.125V with MSI Afterburner voltage curve tool.
-
Just in case it is a different version, here's mine.
-
I've never really figured out what the secret to doing well is for this benchmark.
-
Try disabling cores. Instead, push up the clocks and run the lowest RAM latency possible.
-
What does your GPU report to GPU-Z in watts?
Using the 2K version from what it says.
Tried locking the voltage with the curve and it still locks up. Now the curve doesn't even work.
Thanks.
This bench is based on very high MHz.