  • You’re right, it’s not the same die, but the advanced packaging techniques that they keep improving (like the vertical stacking you mention) make for a much tighter set of specs for the raw flash storage silicon compared to what they might be putting in USB drives or NVMe sticks, in power consumption/temperature management, bus speeds/latency, form factor, etc.

    So it’d be more accurate to describe it as a system in a package (SiP) rather than a system on a chip (SoC). Either way, that carries certain requirements that aren’t present for a standalone storage package separately soldered onto the PCB, or even storage behind some kind of non-soldered, swappable interface.


  • Packaging flash storage into the SiP alongside the SoC costs more than manufacturing the same amount of storage in an M.2 or external USB form factor, so those prices can’t be directly compared. They’re making a big chunk of profit on storage upgrades, and on cloud subscriptions, but it’s not exactly cheap to give everyone 1TB of storage at that base price.


  • The problem is that there are too many separate dimensions to define the tiers.

    In terms of data signaling speed and latency, you have the basic generations of USB 1.x, 2.0, 3.x, and USB4, with Thunderbolt 3 being essentially the same thing as USB4, and Thunderbolt 4 adding some further minimum requirements.

    On top of that, you have USB-PD, which is its own standard for power delivery, including how the devices conduct handshakes over a certified cable.

    And then you have the standards for not just raw data speed, but also which other modes are supported: signals other than USB data can be seamlessly tunneled through the same cable and connection. Most famously, there’s DisplayPort Alt Mode for driving display data over a USB-C connection to a DP-compatible monitor. But there’s also an analog audio mode, so the cable and port can pass analog signals to or from microphones or speakers.

    Each type of cable carries different physical requirements, too, which constrains how long the cable can be and still work properly. That’s why a lot of the cables that support the latest and greatest data and power standards tend to be short. A longer cable might be more convenient, but can come at the cost of not supporting certain functions. I personally have a long cable that supports USB-PD but can’t carry Thunderbolt data speeds or certain other signals. I like it because it’s good for plugging in a charger when I’m not that close to an outlet, but I also know it’s not a good cable for connecting my external SSD, which would be bottlenecked at USB 2.0 speeds.

    So the tiers themselves aren’t going to be well defined.
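
    One way to make that concrete: each cable is a point in a multi-dimensional capability space, and tiers would require a total ordering that simply doesn’t exist there. A minimal Python sketch (the class, the numbers, and the two example cables are all illustrative, not taken from any spec database):

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CableCaps:
        """A hypothetical model of one USB-C cable's independent capability axes."""
        data_gbps: float   # max data rate (USB 2.0 ~ 0.48, USB4 = 40)
        pd_watts: int      # max negotiated USB-PD power (60, 100, 240, ...)
        dp_alt_mode: bool  # supports DisplayPort Alt Mode
        length_m: float    # physical length

    def dominates(a: CableCaps, b: CableCaps) -> bool:
        """True only if `a` is at least as capable as `b` on every axis."""
        return (a.data_gbps >= b.data_gbps
                and a.pd_watts >= b.pd_watts
                and (a.dp_alt_mode or not b.dp_alt_mode)
                and a.length_m >= b.length_m)

    # The long charging cable described above: high power, USB 2.0 data only.
    long_charger = CableCaps(data_gbps=0.48, pd_watts=100, dp_alt_mode=False, length_m=3.0)
    # A short, full-featured Thunderbolt-class cable.
    short_tb = CableCaps(data_gbps=40.0, pd_watts=100, dp_alt_mode=True, length_m=0.8)

    # Neither cable dominates the other, so no single tier can rank them.
    print(dominates(long_charger, short_tb), dominates(short_tb, long_charger))  # False False
    ```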


  • Apple devices that don’t have at least Thunderbolt 3 on all ports do mark the Thunderbolt-capable ports with the Thunderbolt logo, with one exception: the short-lived 12-inch MacBook (non-Pro, non-Air). Basically, for data transfer:

    • If it’s a 12-inch MacBook, the single USB-C port doesn’t support Thunderbolt, and only supports USB 3.1 Gen 1.
    • In all other devices, if the ports are unmarked, they all support Thunderbolt 3 or higher.
    • If the ports are marked with Thunderbolt symbols, those ports support Thunderbolt but the unmarked ports on the same computer don’t.

    For power delivery, every USB-C port in every Apple laptop supports at least first generation USB-PD.

    For display, every USB-C port in every Apple laptop (and maybe even the desktops) supports DisplayPort alt mode.

    It’s annoying but not actually that hard to remember in the wild.
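
    The rule of thumb compresses to a few branches. A toy sketch (the function name and inputs are made up for illustration; the returned strings just restate the list above):

    ```python
    def max_data_spec(model: str, port_is_marked: bool, any_port_marked: bool) -> str:
        """Toy encoding of the Apple laptop USB-C port rules described above."""
        if model == "MacBook 12-inch":
            return "USB 3.1 Gen 1"            # the lone exception: no Thunderbolt
        if not any_port_marked:
            return "Thunderbolt 3 or higher"  # unmarked ports on all other machines
        if port_is_marked:
            return "Thunderbolt 3 or higher"
        return "USB only, no Thunderbolt"     # unmarked port on a machine with marked ports

    print(max_data_spec("MacBook Pro", port_is_marked=False, any_port_marked=False))
    # -> "Thunderbolt 3 or higher"
    # Power and display are simpler: every port supports at least
    # first-generation USB-PD and DisplayPort Alt Mode.
    ```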


  • Everything defined in the Thunderbolt 3 spec was incorporated into the USB4 spec, so Thunderbolt 3 and USB4 should be basically identical. In reality, the two standards are enforced by different certification bodies, so a hardware manufacturer can’t really market compliance with one or the other until it gets that certification. Framework’s laptops dealt with that for a while: their ports met specs that were basically identical to USB4 or even Thunderbolt 4, but Framework couldn’t say so until after units had already been shipping.



  • Apple does two things that are very expensive:

    1. They use a huge physical area of silicon for their high performance chips. The “Pro” line of M chips has a die size of around 280 square mm, the “Max” line is about 500 square mm, and the “Ultra” line is possibly more than 1000 square mm. This is incredibly expensive to manufacture and package (the yield sketch below makes this concrete).
    2. They pay top dollar for exclusive first access to TSMC’s new nodes. They lock up the first year or so of TSMC’s manufacturing capacity at any given node, after which there is enough capacity to accommodate designs from other TSMC clients (AMD, NVIDIA, Qualcomm, etc.). That means you can go out and buy an Apple device made on TSMC’s latest node before AMD or Qualcomm have even announced the product lines that will use that node.

    Those are business decisions that others simply can’t afford to follow.
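
    On the first point, a big die hurts twice: fewer candidate dies fit on a wafer, and each one is more likely to catch a killer defect. A back-of-the-envelope sketch using the classic Poisson yield model (the defect density is an assumed illustrative figure, not a published TSMC number):

    ```python
    from math import exp, pi

    def good_dies_per_wafer(die_mm2: float, defects_per_cm2: float = 0.1,
                            wafer_diameter_mm: float = 300.0) -> float:
        """Gross dies per wafer times the Poisson yield exp(-defect_density * area)."""
        wafer_area_mm2 = pi * (wafer_diameter_mm / 2) ** 2
        gross_dies = wafer_area_mm2 / die_mm2              # ignores edge loss
        die_yield = exp(-defects_per_cm2 * die_mm2 / 100)  # mm^2 -> cm^2
        return gross_dies * die_yield

    for name, area in [("Pro", 280), ("Max", 500), ("Ultra", 1000)]:
        print(f"{name} (~{area} mm^2): ~{good_dies_per_wafer(area):.0f} good dies per wafer")
    # -> Pro ~191, Max ~86, Ultra ~26: roughly 7x the silicon cost per good die
    #    at the top end, before packaging.
    ```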


  • The biggest problem they are having is platform maturity

    Maybe that’s an explanation for desktop/laptop performance, but I look at the mobile SoC space where Apple holds a commanding lead over ARM chips from Qualcomm, and where Qualcomm has better performance and efficiency than Samsung’s Exynos line, and I’m thinking a huge chunk of the difference between manufacturers can’t simply be explained by ISA or platform maturity. Apple has clearly been prioritizing battery life and efficiency for 10+ generations of Apple Silicon in the mobile market, and has a lead independent of its ISA, even as it trickled over to the laptop and desktop market.



  • Semiconductor manufacturing has gotten better over time, with exponential improvements in transistor density that translate pretty directly into performance. This observation traces back to the 1960s and is commonly known as Moore’s Law.
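
    As a quick sanity check on what “doubling roughly every two years” compounds to (the dates and transistor counts are well-known reference points, used here illustratively):

    ```python
    # Density doubling roughly every 2 years, compounded from the Intel 4004
    # (1971, ~2,300 transistors) to a modern flagship SoC:
    doublings = (2021 - 1971) / 2
    print(f"{2 ** doublings:,.0f}x over 50 years")  # ~33,554,432x
    # 2,300 transistors * ~33.5 million ~= 77 billion, right in the range of
    # today's largest consumer chips (tens of billions of transistors).
    ```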

    Fitting more transistors into the same space required quite a few technical advancements and paradigm shifts. But for the first few decades of Moore’s Law, every time they started to approach some kind of physical limit, they’d develop a completely new technique to get things smaller: photolithography moved from off-the-shelf chemicals purchased from photography companies like Eastman Kodak to specialized manufacturing processes, while the light used moved to shorter and shorter wavelengths, with new technology like lasers producing even more precisely etched masks.

    Most recently, the main areas of physical improvement have been the use of extreme ultraviolet (EUV) wavelengths to get really small features, and certain three-dimensional structures that break out of the old paradigm of stacking planar materials on each other. Each of these breakthroughs was 20 years in the making, so the R&D and the implementation details had to be hammered out with partners in a tightly orchestrated process, just to see if it would even work at scale.

    Some manufacturers recognized the huge cost, and the uncertainty of success, in taking techniques from academic papers in the 2000s all the way to mass-produced chips in 2025, so they abandoned the leading edge. GlobalFoundries, Micron, and a bunch of others basically decided it wasn’t worth the investment to compete, and now manufacture on older nodes, leaving the newest ones to Intel, Samsung, and TSMC.

    TSMC managed to get EUV working at scale before Intel did. And even though Intel beat TSMC to market with certain three-dimensional transistor structures known as “FinFETs,” over the next two generations TSMC managed to pack them in at much higher density by combining those FinFETs with lithography techniques that Intel couldn’t figure out fast enough. Every time Intel seemed to get close, a new engineering challenge would stifle them. After a few years of stagnation, Intel went from being consistently 3 years ahead of TSMC to seeming like it’s about 2 years behind.

    On the design side of things, AMD pioneered chiplet-based design, where different pieces of silicon could be packaged together, which allowed them to have higher yields (an error in a big slab of silicon might make the whole thing worthless) and to mix and match things in a more modular way. Intel was slow to adopt that, so AMD started taking the lead in CPU performance per watt.
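
    That yield point can be made concrete with the standard Poisson yield model (the defect density below is an assumed, illustrative number):

    ```python
    from math import exp

    D = 0.1  # assumed defect density in defects per cm^2 (illustrative)

    def die_yield(area_mm2: float) -> float:
        """Probability that a die of this area has zero killer defects."""
        return exp(-D * area_mm2 / 100)

    print(f"monolithic 800 mm^2 die:  {die_yield(800):.0%} yield")  # ~45%
    print(f"single 200 mm^2 chiplet:  {die_yield(200):.0%} yield")  # ~82%
    # Chiplets are tested before packaging, so one defect scraps 200 mm^2 of
    # silicon instead of 800 mm^2, and the same wafer yields far more sellable parts.
    ```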

    It comes down to difficult engineering challenges, traceable to decisions made over the past few decades. Not all of those decisions were obviously wrong at the time, and nobody could have predicted that TSMC and AMD would be able to leapfrog Intel on these specific engineering challenges.

    Intel has a few things on the roadmap that might allow it to leapfrog the competition again (especially if the competition runs into setbacks of its own). It’s ramping up the use of EUV in its current processes, it’s ramping up a competing three-dimensional transistor structure it calls RibbonFET to compete with TSMC’s Gate All Around design (both of which are supposed to replace FinFETs), and it’s hoping to beat TSMC to backside power delivery, which would represent a significant paradigm shift in how chips are designed.

    It’s true that in business, success begets success, but it’s also true that each new generation presents its own novel challenges, and it’s not easy to see where any given competitor might get stuck. Semiconductor manufacturing is basically wizardry, and the history of the industry shows that today’s leaders might get left behind, really quickly.



  • there are only two places in the world that have the capability of doing the super small nm scale chips: Netherlands and Taiwan.

    No, there’s only one company in the world that can make these machines: ASML in the Netherlands. TSMC, Intel, Samsung, and everyone else buy their machines from ASML, who has a monopoly on the EUV machines necessary for modern semiconductor nodes.

    These machines generate light at the precise wavelength needed (13.5 nm) by firing high-powered lasers at precisely timed droplets of molten tin, creating a plasma that emits EUV, which a series of precisely arranged mirrors then focuses onto silicon wafers through a mask. Even small changes in humidity and air pressure throw off the calibration, so the clean rooms are engineered to hold those constant no matter what the outdoor weather is. Any fab has ultra-sensitive seismic detectors to anticipate seismic activity that might affect yields, and the systems have to account for the vibrations generated by human footsteps, fans, and other equipment.

    The level of precision necessary for current-generation fabs is far beyond what any one company, or even any one country, could manage alone.




  • I mean, that’s kinda exactly what I said

    Yes, I’m agreeing with you and expanding on it, showing where the lines blur. Apple wants to get 30% of everything, when it’s only reasonable (and supported by historical practice) to get 30% of actual purchases of software. The history of the Apple App Store is an expansion beyond that original, relatively reasonable 30% cut on a narrow category, quietly spreading into a bunch of new categories that don’t actually resemble the original one.

    Apple knows they can’t take a 30% cut of every Uber fare or Doordash order or Amazon purchase of physical goods, and they don’t try to. It’s the categories in between where their policies start to look arbitrary.

    And now, with Patreon in the crosshairs, we can see just how twisted it’s gotten. Like I was saying, I see Patreon as something more like PayPal than, like, Netflix.


  • 30% is a reasonable cut for the distribution of software for which almost all revenue is marginal profit. When it’s a transaction for services that cost money to provide (like Uber or online shopping) or a transfer of money on behalf of someone else (think Venmo or PayPal or just a regular banking app), a 30% cut of the whole transaction doesn’t always make sense.

    Apple recognizes this and doesn’t take a 30% cut for those types of services. But they don’t always categorize things correctly. Patreon is something like PayPal, where the app owner takes a small cut of each transaction, so a 30% cut of the whole transaction is huge, something like 10x as much as Patreon itself makes.

    Apple (and Google and Steam) are taking a software distribution cut for a service that more closely resembles payment processing, which is usually a 1-3% fee, not a 30% fee.
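
    Worked out on a hypothetical $10 pledge (the percentages are ballpark public figures, not exact rates for any particular plan):

    ```python
    pledge = 10.00
    patreon_fee    = 0.08 * pledge  # Patreon's platform cut, roughly 5-12% by plan
    processing_fee = 0.03 * pledge  # typical card processing, ~1-3%
    apple_cut      = 0.30 * pledge  # App Store commission on in-app purchases

    print(f"Patreon: ${patreon_fee:.2f}, processor: ${processing_fee:.2f}, Apple: ${apple_cut:.2f}")
    # Apple's $3.00 is ~10x the processing fee and several times Patreon's own take.
    ```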



  • Facing new threats in the form of arm and risc v.

    I think the instruction set is basically irrelevant to the discussion. Intel is losing to TSMC at the actual foundry process. Intel is losing to AMD at the design of desktop/server class chips running the x86 instruction set.

    Within the ARM world, Apple is running circles around the competition. Qualcomm can’t compete on mobile SoCs, and Samsung’s Exynos is even worse. Qualcomm is trying to get into laptops, but the performance and efficiency aren’t competitive with Apple, and might not even be that far ahead of AMD.

    Intel is betting the company on various stacking and packaging technologies to fit way more stuff into a small surface area, but is basically left hoping that it all works.