
Xeon vs. I7 - Educate me, please

Discussion in 'Tech Support' started by Joe's Nemesis, Jun 24, 2016.

  1. Joe's Nemesis

    Joe's Nemesis High Score: 2,058 ~ Prestige ~

    Joined:
    Jan 29, 2012
    Messages:
    1,191
    High Score:
    2,058
    What I'm Doing:
    I'm in the first planning stages (probably 2 years out) of building a workstation to stream live video from multiple cameras (with live editing). The camera feeds will also be recorded and a video produced later. On the audio side, the live stream will get the house mix, but the finished video will have audio produced in a DAW. So, there'll be quite a bit of rendering as well as live video production.
    What I'm looking at:
    Since I'm so far out from the actual build, I'm just using today's chips to get a handle on my choices. So, if I were building today, I'd be looking at either an i7-6950X 10-core paired with two GeForce GTX 1080 GPUs, or a Xeon chip with a similar setup (depending on answers).
    Other parts of my build:
    I'll also have a DeckLink Quad 2 8-Channel 3G-SDI Capture and Playback Card and a PCIe SSD (240 GB) as an OS drive (two 1 TB HDDs for storage). Of course, other accessories are included, like CD/DVD burners, a PSU (Corsair 1000W), a Blu-ray burner (to make hardcopy masters), 32 GB RAM, etc. Haven't settled on a board yet, but again, it's too far out to worry about specifics, and it's dependent on answers here. I'm very aware of lane-sharing issues with certain boards, but honestly, I probably only need two slots to run at x16 unless I get a third card, which seems (even more?) ridiculous.
    (I'm estimating the cost of the build to be around 6-8k).
    So, could someone educate me on . . .
    1. Benefits
    I know the Xeon supports ECC memory, has tremendous up-time, and (I believe) more lanes and cache, but is it really better all around, or just in limited environments (such as industry servers)? (Huh, guess I just answered my own question - besides this brick being gold, is there anything else that makes it better than my clay brick?)
    2. Real-world results
    And related to (1), is going with the Xeon overkill/unnecessary, or will I see a real-time difference for what I want to do with it?
    3. Dual chips vs. one
    Looking specifically at the chips in comparison, where would I fall on the law of diminishing returns if I got a dual-CPU board with two Xeon processors as compared to the i7? (Figure my budget is $600-$1,000 per processor.)
    4. The reality of assigning cores
    Am I right in thinking that having so many cores will enable me to assign recording live sound from a mixer (over USB) to one core set (say, one physical core) and then assign the rest of the cores to the live processing/streaming without hesitations or disruptions in the signal? (I'm asking whether assigning it this way will give me better quality in both processes, not whether I can assign cores.)


    I appreciate the help in advance.
     
    Last edited: Jun 24, 2016
  2. Ched

    Ched Da Trek Moderator DLP Supporter ⭐⭐

    Joined:
    Jan 6, 2009
    Messages:
    8,379
    Location:
    The South
    I don't know enough to answer this, sadly. However I do sometimes see people asking similar questions (about i7 versus Xeon) in the Intel Processors subforum over on Hardforum.

    They can get a little pissy when you ask about things this far in advance though, so ymmv.
     
  3. kaleironfist

    kaleironfist Third Year

    Joined:
    Mar 7, 2008
    Messages:
    80
    Location:
    Australia
    At two years out, nobody can give good advice on hardware, because that's at least one product cycle away, potentially two - and that's just looking at Intel.

    The 6950X is Intel trolling everyone. It makes no sense at all at its current price point given the rest of their lineup. If you need high clock frequency, there's the cheaper 6850K or even 6900K. If you need cores, there's the 2P (or more) Xeon route.

    The only things you have that would 'need' x16 are graphics cards (you're not using them for compute, so they don't). The PCIe SSD uses only 4 lanes, and your capture card uses only 8. You won't have any issues on this front.
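    Putting rough numbers on that - the slot widths below are my assumptions for the parts listed in the original post, not manufacturer-confirmed figures:

```python
# Rough PCIe lane budget for the build described above, against a
# 40-lane HEDT CPU. Slot widths are assumed, not verified specs.
devices = {
    "GTX 1080 #1": 8,           # x8 is plenty for graphics
    "GTX 1080 #2": 8,
    "DeckLink Quad 2 capture": 8,
    "PCIe SSD": 4,
}

cpu_lanes = 40  # i7-6950X class chips expose 40 lanes

used = sum(devices.values())
print(f"{used} of {cpu_lanes} lanes used, {cpu_lanes - used} spare")
```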

    Xeons don't have more up-time, lanes, or cache (per core anyway), relative to consumer i7s since HEDT i7s are just cut down Xeons. The biggest benefit to going Xeon is in ECC memory use and core count. You'll also be able to benefit from multiple CPUs in a single motherboard.

    You'll see a difference if there's a difference between the CPUs and if your software can harness the processing power. Some software has limits on how many cores it can use, so you'll need to research this before even thinking of buying anything.

    As I said above in regards to the 6950X, you'd be getting greater value for money with dual Xeons. Video processing threads very easily and you can find Xeons with varying core counts at varying prices. At some point you'll hit diminishing returns, but your usage scenario sounds like it'll hit budget limits first.
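    Where that diminishing-returns point lands can be illustrated with Amdahl's law - a standard scaling model, not something specific to these chips, and the 95% parallel fraction below is a made-up example figure:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelizes perfectly. Shows why extra cores stop paying off.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A hypothetical video pipeline that is 95% parallel:
for n in (10, 20, 40):
    print(f"{n} cores -> {speedup(0.95, n):.1f}x")
# Even with infinite cores, the ceiling here is 1 / 0.05 = 20x.
```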

    I think you're after low DPC (deferred procedure call) latency rather than dedicating cores to certain tasks. DPC latency is difficult to know beforehand, as it depends on the motherboard and its implementation. In general, fewer third-party chips/drivers means lower DPC latency, but it's not always true.
     
  4. Effulgent Dawn

    Effulgent Dawn First Year

    Joined:
    Oct 15, 2015
    Messages:
    23
    Location:
    U.S.
    High Score:
    0
    Depending on what kind of video editing you're doing, you may want to up the RAM to 64 GB. Some kinds of motion graphics work (e.g. After Effects) will actually be able to use it, especially if you're working or live mixing at 4K resolutions. You'll also likely want a separate SSD as an audio/video cache during edits.
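    To put rough numbers on that - assuming uncompressed 8-bit RGBA frames, which is my simplifying assumption, not a claim about any particular editor's cache format:

```python
# How many uncompressed 4K frames fit in a given amount of RAM.
width, height, bytes_per_pixel = 3840, 2160, 4
frame_bytes = width * height * bytes_per_pixel  # ~33 MB per frame

for ram_gb in (32, 64):
    frames = (ram_gb * 1024**3) // frame_bytes
    print(f"{ram_gb} GB RAM: ~{frames} frames (~{frames / 30:.0f} s at 30 fps)")
```

At 30 fps, doubling from 32 GB to 64 GB roughly doubles the cached preview length, which is why the extra RAM is actually usable here.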
     
  5. Joe's Nemesis

    Joe's Nemesis High Score: 2,058 ~ Prestige ~

    Joined:
    Jan 29, 2012
    Messages:
    1,191
    High Score:
    2,058
    kaleironfist - thanks for taking the time to type all that out. I asked a few questions below, not to argue, but to help expand my knowledge because you are definitely more knowledgeable than I am in this area.

    Yep. Again, this is more so I can start developing a reference point between the two lines. Once that happens, it's much easier for me to keep up with the similarities and differences. In other words, I'm trying to flatten the learning curve I'll be facing in two years.

    I thought about those, but I did choose the 6950X because of lane availability for two cards. I know most things'll never need a full x16 slot, but this build needs to be expandable years into the future (as much as possible with electronics). I'm usually pretty lucky in that I've gotten around 5 years out of my chips - but that's because I buy top end and then run the hell out of them (just traded out my i7-940 about a year ago).

    Unless, of course, I come across the need for a third card (I've already started rendering stuff at home, but it's small right now and my OCed 4790K is handling it well for now).

    Where's the industry at in creating more hardware that can use the x16 lanes? I remember, for instance, my first modern computer had this brand new thing called a USB plug. I never used it. It took 3-4 years before anything was made that could use it in a way I needed.

    I'm wondering if that'll repeat for the multiple x16 lane chips/boards.

    Interested in why you say they don't have more up-time. I've read quite a bit that says they do (and I always believe what I read on the internet). I'm starting to wonder if they're referring to better stability due to the ECC memory.


    That's a very good point. On the other hand, if you're running multiple programs at the same time, can't you harness the processing power by making sure the programs aren't sharing cores? (Part of my question you answered below.)

    That's very helpful. (Really, your entire post is, but this is kind of the brass tacks of the discussion.)


    Huh. Getting to the upper reaches of my computer knowledge now. I follow what you're saying, but I never thought about it in the context of what I'm doing here, or about ways of getting it lower. I'll have to do a little research on that in my spare time. Thanks!

    Good to know. I'm used to seeing a third of my 16 GB being used on my home system. I figured 32 would be enough, and it probably will be. But you make a good point about checking specifics - especially as software develops over the next 24 months.
     
  6. kaleironfist

    kaleironfist Third Year

    Joined:
    Mar 7, 2008
    Messages:
    80
    Location:
    Australia
    At this level of performance, all of them have 40 lanes, which can be split into five lots of x8 no problem. More than likely, you'll run into physical space issues, as most graphics cards are dual slot, leaving you with six used slots out of seven or eight, depending on case and motherboard. If you pick up single slot variants (which are rare and expensive), you'll have more space to work with.

    Literally not an issue. Your graphics cards will be downgraded to x8 instead of x16 and will lose approximately 1% performance. Part of it is that the PCI Express standard keeps improving; the other part is the algorithms used - the more you have to use the CPU, the less the GPUs are used, which means the reduced lane width isn't an issue. If you were actually using a lot of DP compute you might have a valid point, but Nvidia has extremely poor DP compute, and AFAIK video processing doesn't use it. I might be wrong about that, but I've found that just having a decent graphics card is enough.
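    For reference, the raw link bandwidth gap is easy to compute from the PCIe 3.0 spec (8 GT/s per lane, 128b/130b encoding); the ~1% performance figure above comes from benchmarks, since cards rarely saturate even x8:

```python
# PCIe 3.0 payload bandwidth per direction:
# 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane.
gbps_per_lane = 8 * (128 / 130) / 8  # GB/s of payload per lane

for lanes in (8, 16):
    print(f"x{lanes}: ~{lanes * gbps_per_lane:.1f} GB/s per direction")
```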

    Right in one. Xeons by themselves don't have better uptime, but paired with ECC memory and enterprise gear that has been tested to hell and back, there's less instability to cause downtime.

    You're always sharing cores: the OS will determine which cores are best to use in any scenario, and threads will move between available cores. Regardless, if your software has a maximum limitation, you're right that you can run multiple programs simultaneously to get the most out of a high-core-count system. There's almost no reason to force software not to share cores unless you don't trust the vendor to validate their CPUs (and at that point, why would you buy them?).
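    For what it's worth, pinning is possible if you ever do want it - on Linux via `os.sched_setaffinity` (Windows exposes an equivalent affinity mask through Task Manager or SetProcessAffinityMask). A minimal sketch:

```python
import os

# Pin the current process to core 0, then read the mask back.
# sched_setaffinity is a Linux-only API; the hasattr guard makes
# this sketch a no-op on other platforms.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})  # pid 0 = this process
    print("allowed cores:", sorted(os.sched_getaffinity(0)))
```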

    It's rare these days to worry that much about audio buffers running out, and most people haven't had to think about DPC latency in a long time, as every vendor has taken steps to reduce it to a level you won't notice under general usage. Your use scenario is a few steps above that, so I can't really give good advice other than to keep it as low as possible. You absolutely need it below 500 microseconds, but modern consumer boards (search for DPC latency results) can get pretty damn low.
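    Measuring DPC latency proper takes a Windows-side tool (e.g. LatencyMon), but you can get a crude feel for the symptom - late wake-ups - from a timer loop. This is an illustration of scheduling jitter, not a DPC measurement:

```python
import time

# Sleep 1 ms repeatedly and record how far past the deadline we wake.
# A loaded or badly behaved system shows much larger overshoots.
samples = []
for _ in range(200):
    start = time.perf_counter()
    time.sleep(0.001)
    overshoot_us = (time.perf_counter() - start - 0.001) * 1e6
    samples.append(overshoot_us)

print(f"worst wake-up overshoot: {max(samples):.0f} microseconds")
```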
     
  7. Perspicacity

    Perspicacity Destroyer of Worlds ~ Prestige ~ DLP Supporter

    Joined:
    Nov 27, 2007
    Messages:
    1,022
    Location:
    Where idiots are not legally permitted to vote
    High Score:
    3,994
    With supercomputer acquisitions based on MIC designs, it's not unheard of for vendors to downgrade core counts from the number of physical cores extant on the chips. Better to deliver 30 "reliable" cores, e.g., than to sell a chip as having 32 cores, but one of them flaky. (One always hopes they're reliable as delivered from the factory, but a non-negligible fraction always get pulled and replaced after factory testing and "burn-in.") In other words, even straight from the factory, not all cores on a MIC chip are equal.

    Along those lines, it's off topic for this thread (as I doubt EC will ever need to worry about this for his applications), but for many high performance applications, memory bandwidth and NUMA issues do require some sort of binding for optimal performance. OS support for NUMA optimization, HBM utilization, etc. is improving, but still inconsistent.
     
  8. Lord Ravenclaw

    Lord Ravenclaw DLP Overlord Admin DLP Supporter

    Joined:
    Apr 2, 2005
    Messages:
    4,372
    Location:
    Denver, CO
    Many Xeons also have considerably more L2/L3 cache. This indirectly improves performance by keeping more out of RAM and closer to the cores.
     
  9. bob99

    bob99 High Inquisitor

    Joined:
    Aug 14, 2011
    Messages:
    533
    I'm not sure what to say about work applications' ability to use multiple cores. The usefulness of a multi-core CPU and the GPU really depends on what specific programs you want to run and what your hardware setup is. And the answer might change once new programs come out.

    Out of curiosity, what are you wanting to use this for? It sounds like a beefy setup for rendering/streaming.
     
    Last edited: Jul 12, 2016
  10. Joe's Nemesis

    Joe's Nemesis High Score: 2,058 ~ Prestige ~

    Joined:
    Jan 29, 2012
    Messages:
    1,191
    High Score:
    2,058
    Real time video editing and streaming (including camera switching, graphics, etc etc), DAW work, rendering, blah blah blah.
     