• wewbull@feddit.uk · 5 months ago

    We do, depending on how you count it.

    There are two major widths in a processor: the data register width and the address bus width. But even that is not the whole story. If you go back to a processor like the 68000, the classic 16-bit processor, it has:

    • 32-bit data registers
    • 16-bit ALU
    • 16-bit data bus
    • 32-bit address registers
    • 24-bit address bus

    Some people called it a 16/32-bit processor, but really it was the 16-bit ALU that classified it as 16-bit.
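
    To make the ALU point concrete, here’s a minimal C sketch (the helper name is made up) of the idea: a 32-bit add decomposed into two 16-bit ALU passes with a carry between them, which is roughly how the 68000’s microcode pushes a 32-bit ADD.L through its 16-bit ALU.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* A 32-bit add built from two 16-bit ALU passes, carry in between. */
    static uint32_t add32_via_16bit_alu(uint32_t a, uint32_t b) {
        uint32_t lo    = (a & 0xFFFFu) + (b & 0xFFFFu);             /* low pass   */
        uint32_t carry = lo >> 16;                                  /* carry out  */
        uint32_t hi    = ((a >> 16) + (b >> 16) + carry) & 0xFFFFu; /* high pass  */
        return (hi << 16) | (lo & 0xFFFFu);
    }

    int main(void) {
        printf("0x%08x\n", add32_via_16bit_alu(0x0001FFFFu, 1)); /* 0x00020000 */
        return 0;
    }
    ```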

    If you look at a Zen 4 core it has:

    • 64-bit data registers
    • 512-bit AVX data registers
    • 6 x 64-bit integer ALUs
    • 4 x 256-bit AVX ALUs
    • 2 x 128-bit data bus to DDR5 (dual edge 64-bit)
    • ~40 bits of addressable physical RAM

    So, what do you want to call this processor?

    64-bit (integer width), 128-bit (physical data bus width), 256-bit (widest ALU) or 512-bit (widest register width)? Do you want to multiply those numbers by the number of ALUs in a core? …by the number of cores on a piece of silicon?
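
    To see why even “widest register” is slippery, here’s a hedged C sketch using AVX-512 intrinsics: the source is plain “64-bit” x86-64 code, yet one instruction works on a 512-bit register. On Zen 4 that instruction is, as far as I know, executed by double-pumping the 256-bit AVX units. (Assumes compiling with -mavx512f and a CPU that reports AVX-512.)

    ```c
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int64_t a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
        int64_t b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
        int64_t out[8];

        /* One instruction's worth of work on a 512-bit register:
         * eight 64-bit adds at once. */
        __m512i va = _mm512_loadu_si512(a);
        __m512i vb = _mm512_loadu_si512(b);
        _mm512_storeu_si512(out, _mm512_add_epi64(va, vb));

        for (int i = 0; i < 8; i++)
            printf("%lld ", (long long)out[i]);
        printf("\n");
        return 0;
    }
    ```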

    Me, I’d say Zen 4 was a 256-bit core, but you could argue for any of the above numbers.

    Basically, it’s a measurement that lost all meaning so people stopped using it.

    • Blackmist@feddit.uk · 4 months ago

      I gave up trying to figure out what the “bitness” of CPUs was around the time the Atari Jaguar came out and people described it as 64-bit because it had a 32-bit graphics chip plus a 32-bit sound chip.

      It’s been mostly marketing bollocks since forever.

    • Buffalox@lemmy.world · 4 months ago

      At less than a tenth the length, this is actually a better explanation than the article, and it corrects the premise (that we do, in fact, go beyond 64 bits) right at the beginning.
      If you absolutely had to put a bit width on the Zen 4, the 2×128-bit data bus is probably the best single measure, totaling 256 bits IMO.

      • wewbull@feddit.uk · 4 months ago

        Even then, at what point do you measure it? The DDR interface is likely much narrower than the interfaces between cache levels. Where does the core end and the memory begin?

        • Buffalox@lemmy.world · 4 months ago

          Yes, you are 100% right, and I did consider level 3 cache as a better measure, because it allows communication between cores without going through RAM, and cache generally has a high hit rate. But that number was surprisingly difficult to find, so I settled on the data bus.
          Anyway, it would be absolutely fair to call it 256-bit by more than one measure. But it certainly isn’t just 64-bit, because it has 512-bit instructions, so the instruction set isn’t limited to 64 bits. Even if someone were stubborn enough to claim the general instruction set is 64-bit, it can decode and execute two 64-bit instructions simultaneously per core, making it at least 128-bit by any measure.

    • LeFantome@programming.dev · 4 months ago

      I would say that you make a decent argument that the ALU has the strongest claim to the “bitness” of a CPU. In that way, we are already beyond 64 bit.

      For me though, what really defines a CPU is the software that runs natively on it. The Zen4 runs software written for the AMD64 family of processors. That is, it runs 64-bit software. This software will not run on the “32-bit” x86 processors that came before it (like the K5, K6, and original Athlon). If AMD released an AMD128 instruction set, it would not run on the Zen4, even though the hardware may technically be capable of it.

      The Motorola 68000 only had a 16-bit ALU but was able to run the same 32-bit software that ran on later Motorola processors that were truly 32-bit. Software written for the 68000 was essentially still native on processors sold as late as 2014 (35 years after the 68000 was released). This was not some kind of compatibility mode; these processors were still using the same 32-bit ISA.

      The Linux kernel that runs on the Zen4 will also run on 64-bit machines made 20 years ago, as they also support the amd64 / x86-64 ISA.

      Where the article is correct is that there does not seem to be much push to move on from 64-bit software. The Zen4 supports instructions to perform higher-bit operations, but they are optional. Most applications do not rely on them, including the operating system. For the most part, the Zen4 runs the same software as the Opteron (released in 2003). The same pre-compiled Linux distro will run on both.

  • hades@lemm.ee · 5 months ago

    We used to ride bicycles when we were children. Then we started driving cars. Bicycles have two wheels, cars have four. Eight wheels seems to be the logical next step, so why don’t we drive eight-wheeled vehicles?

  • ArbiterXero@lemmy.world · 5 months ago

    32-bit CPUs having difficulty accessing more than 4 GB of memory was exclusively a Windows problem.

    • aard@kyu.de · 5 months ago

      You still had a 4 GB memory limit per process, as well as a total physical memory limit of 64 GB. Especially the first one was a problem for Java apps before AMD introduced its 64-bit extensions, and a reason to use Sun servers for that.
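
      A minimal C sketch of where those numbers come from: the per-process limit is just pointer width, while PAE widened only the physical address to 36 bits. Compile the same file with -m32 and then -m64 (link with -lm) to watch the virtual number change.

      ```c
      #include <math.h>
      #include <stdio.h>

      int main(void) {
          unsigned ptr_bits = (unsigned)(sizeof(void *) * 8);

          /* A process can only name 2^(pointer width) bytes of virtual
           * address space: 4 GiB on a 32-bit build, no matter how much
           * physical RAM PAE (36-bit physical addresses) maps. */
          printf("pointer width: %u bits\n", ptr_bits);
          printf("virtual space: %.0Lf GiB\n", powl(2.0L, ptr_bits - 30));
          printf("PAE physical : %.0Lf GiB\n", powl(2.0L, 36 - 30));
          return 0;
      }
      ```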

  • just_another_person@lemmy.world · 5 months ago

    Is this a question?

    We haven’t even come close to exhausting 64-bit addresses yet. If you think a bigger bit number makes things faster, it’s technically the opposite.

    • Cethin@lemmy.zip · 5 months ago

      Yeah, 64-bit handles almost all the use cases we have. Sometimes we want double the precision (a double) or double the length (a long), but we can do that without the hardware being 128-bit. It’s harder to do half. Sure, it’d be slightly faster for some things, but it’s not significant.
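
      For what it’s worth, GCC and Clang already do this on 64-bit targets: a sketch using their (non-standard) __int128 extension, which the compiler lowers to pairs of 64-bit ALU operations rather than needing 128-bit hardware.

      ```c
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          uint64_t a = UINT64_MAX, b = UINT64_MAX;

          /* Full 128-bit product of two 64-bit values; lowered by the
           * compiler to 64-bit multiplies and add-with-carry. */
          unsigned __int128 p = (unsigned __int128)a * b;

          printf("high: %llx low: %llx\n",
                 (unsigned long long)(p >> 64), (unsigned long long)p);
          return 0; /* prints "high: fffffffffffffffe low: 1" */
      }
      ```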

    • Technus@lemmy.zip · 5 months ago

      We don’t even have true 64-bit addressing yet. x86-64 uses only 48 bits of a 64-bit address, and 64-bit ARM can use anything between 40 and 52 depending on the specific configuration.
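
      In practice that shows up as x86-64’s “canonical address” rule: bits 63..47 must all copy bit 47. A small C sketch of the check (it assumes two’s-complement conversion and arithmetic right shift, which GCC and Clang provide):

      ```c
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Sign-extend the low 48 bits and compare: a canonical x86-64
       * address must survive the round trip unchanged. */
      static bool is_canonical_48(uint64_t addr) {
          int64_t sext = ((int64_t)(addr << 16)) >> 16; /* replicate bit 47 */
          return (uint64_t)sext == addr;
      }

      int main(void) {
          printf("%d\n", is_canonical_48(0x00007FFFFFFFFFFFULL)); /* 1: top of lower half    */
          printf("%d\n", is_canonical_48(0xFFFF800000000000ULL)); /* 1: bottom of upper half */
          printf("%d\n", is_canonical_48(0x0000800000000000ULL)); /* 0: non-canonical hole   */
          return 0;
      }
      ```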

  • irotsoma@lemmy.world · 5 months ago

    Because computers haven’t come even close to needing more than 16 exabytes of memory for anything. And how many applications need to do basic mathematical operations on numbers greater than 2^64? Most applications haven’t even exceeded the need for 32-bit operations, so really the push to 64-bit was primarily to address more than 4 GB of memory without slow workarounds.
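
    (For reference, the 16-exabyte figure is just the size of a full 64-bit address space; a quick C sanity check:)

    ```c
    #include <stdio.h>

    int main(void) {
        /* 2^64 bytes itself overflows a 64-bit counter, so count in
         * exbibytes: 2^64 / 2^60 = 2^4 = 16 EiB. */
        unsigned long long eib = 1ULL << (64 - 60);
        printf("a full 64-bit address space = %llu EiB\n", eib);
        return 0;
    }
    ```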

    • Justin@lemmy.jlh.name · 5 months ago

      Tons of computing is done on x86 these days with 256-bit numbers, and even 512-bit ones.

      • tunetardis@lemmy.ca · 4 months ago

        You can always combine integer operations in smaller chunks to simulate something that’s too big to fit in a register. Python even does this transparently for you, so your integers can be as big as you want.
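
        A minimal sketch of that chunking in C (the function name is made up): two 128-bit values held as four 32-bit limbs, added with an explicit carry. Python’s ints do the same thing internally, just with dynamically sized limb arrays.

        ```c
        #include <stdint.h>
        #include <stdio.h>

        /* Add two 128-bit numbers stored as four 32-bit limbs,
         * least-significant limb first, propagating the carry by hand. */
        static void add128(const uint32_t a[4], const uint32_t b[4], uint32_t out[4]) {
            uint64_t carry = 0;
            for (int i = 0; i < 4; i++) {
                uint64_t sum = (uint64_t)a[i] + b[i] + carry;
                out[i] = (uint32_t)sum;   /* keep the low 32 bits      */
                carry  = sum >> 32;       /* carry into the next limb  */
            }
        }

        int main(void) {
            uint32_t a[4] = {0xFFFFFFFF, 0xFFFFFFFF, 0, 0}; /* 2^64 - 1 */
            uint32_t b[4] = {1, 0, 0, 0};
            uint32_t r[4];
            add128(a, b, r);
            /* Prints 00000000 00000001 00000000 00000000, i.e. 2^64. */
            printf("%08x %08x %08x %08x\n", r[3], r[2], r[1], r[0]);
            return 0;
        }
        ```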

        The fundamental problem that led to requiring 64-bit was when we needed to start addressing more than 4 GB of RAM. It’s kind of similar to the problem of the Internet, where 4 billion unique IP addresses falls rather short of what we need. IPv6 has a host of improvements, but the massively improved address space is what gets talked about the most since that’s what is desperately needed.

        Going back to RAM though, it’s sort of interesting that at the lowest levels of accessing memory, it is done in chunks that are larger than 8 bits, and that’s been the case for a long time now. CPUs have to provide the illusion that an 8-bit byte is the smallest addressable unit of memory, since software would break badly were this not the case, but it’s somewhat amusing to me that we still shouldn’t really need more than 32 bits to address RAM at the lowest levels, even with the 16 GB I have in my laptop right now. I’ve worked with 32-bit microcontrollers where the byte size is > 8 bits, and yeah, you can have plenty of addressable memory in there if you wanted.
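
        That byte-size wrinkle is even visible from portable C, which only promises that a byte is at least 8 bits; on DSP-style parts like the ones mentioned above, CHAR_BIT can be 16 or 32.

        ```c
        #include <limits.h>
        #include <stdio.h>

        int main(void) {
            /* CHAR_BIT is the number of bits in a byte; the C standard
             * guarantees only CHAR_BIT >= 8. Mainstream hardware prints 8. */
            printf("bits per byte: %d\n", CHAR_BIT);
            printf("bits per int:  %zu\n", sizeof(int) * CHAR_BIT);
            return 0;
        }
        ```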