Migrated from rainynight65@feddit.de, which now appears to be dead. Sadly lost my comment history in the process. Let’s start fresh.

  • 0 Posts
  • 13 Comments
Joined 4 months ago
Cake day: June 24th, 2024

  • Good on this kid for going to such lengths to verify his hypothesis and show a serious weakness in railway infrastructure. I hope he goes on to become a serious railway enthusiast and advocate for safe, efficient rail.

    However, there are far too many factors behind the number of derailments and safety incidents in US rail operations to pin them all on this one issue. Once the major operators embarked on a journey to squeeze more and more money out of the business, a lot of things happened. Trains became longer - excessively so. It used to be that a train 1.4 miles long was considered massive; these days, trains that long are the norm. Can you imagine a train so long that, in hilly terrain, sections of it are being dragged uphill while other sections are pushing downhill?

    Reductions in staff, motive power fleets and maintenance have led to trains being badly composed, with loads distributed in a less than optimal way. An old railway man once told me that the only time he broke a train was when, in a rush and under pressure, he agreed to attach a rake of fully loaded freight cars to the end of a train of empties. Unequal load distribution played a role in a number of major derailment incidents, among them a derailment in Hyndman, PA, which required the town to be evacuated for several days.

    ProPublica has a series of articles on rail safety, including one specifically about the dangers of long trains. So while the worn-out springs certainly don’t help, they are only one of many things impacting rail safety, and probably not even the lowest-hanging fruit.


  • Equally then, the nuclear disasters shouldn’t count, right?

    Deaths from an accident at an active nuclear power plant are not the same as deaths caused by a burst dam that was originally intended to produce electricity one day, but never produced any. Especially if you call the statistic ‘Deaths per unit of electricity production’. At the time of the accident it was just a dam; construction of any hydroelectric facilities was nowhere near beginning, so calling it a ‘hydropower accident’ is highly debatable (probably at least as debatable as calling nuclear ‘conventional’).

    Without the inclusion of those deaths, hydro would be shown to be even safer than nuclear, given that it has produced nearly twice as much electricity in the time span covered by those statistics while causing a similar number of deaths (if you continue to ignore the increased miner mortality; otherwise nuclear will look far worse).

    The article also does not explain how the figure of 171,000 deaths was determined, given that estimates for the Banqiao dam failure range between 26,000 and 240,000. The author mentions (but does not cite) a paper by Benjamin Sovacool from 2016, which analyzes the deaths caused by different forms of energy but, crucially, omits the Banqiao death toll. I will try to get hold of that paper to see the reasoning, but I suspect it may align with mine.

    How do you assume it’s ignoring their increased mortality?

    The article makes zero mention of any such thing, and the section about how the deaths are calculated (footnote 3 in this section) only calls out the deaths from Chernobyl and Fukushima. Direct quote from the footnote:

    Nuclear = I have calculated these figures based on the assumption of 433 deaths from Chernobyl and 2,314 from Fukushima. These figures are based on the most recent estimates from UNSCEAR and the Government of Japan. In a related article, I detail where these figures come from.

    No mention at all of any other deaths or causes of death, nothing whatsoever. It’s the deaths from two nuclear accidents, that’s all. The figures from the cited study alone would multiply the number of nuclear deaths in this statistic. What’s worse, the author has published another article on nuclear energy which essentially comes to the exact same conclusions. But if you include deaths from a burst dam that has never produced electricity (but was planned to do so eventually), then you must include deaths among people who mine the material destined to produce electricity in a nuclear plant.

    To me it simply looks like the author of this article is highly biased towards nuclear, and has done very selective homework.
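    To make the arithmetic of that argument concrete, here is a quick sketch. The death tolls are the ones quoted above (433 + 2,314 for nuclear; 171,000 for hydro including the dam failure); the generation totals are deliberately round, hypothetical numbers chosen only to reflect "hydro produced nearly twice as much electricity", since the article's actual TWh figures aren't reproduced here.

```python
# Illustrative deaths-per-TWh comparison. Death tolls are from the
# article as quoted above; generation totals are HYPOTHETICAL round
# numbers standing in for "hydro produced nearly twice as much".

NUCLEAR_DEATHS = 433 + 2_314          # Chernobyl + Fukushima (article's footnote)
HYDRO_DEATHS_WITH_DAM = 171_000       # article's hydro figure, dominated by the dam failure
HYDRO_DEATHS_WITHOUT_DAM = 2_500      # hypothetical: "a similar number of deaths" to nuclear

NUCLEAR_TWH = 50_000                  # hypothetical cumulative generation
HYDRO_TWH = 100_000                   # hypothetical: roughly twice nuclear's

def deaths_per_twh(deaths: int, twh: int) -> float:
    """Deaths per terawatt-hour of electricity produced."""
    return deaths / twh

print(f"nuclear:           {deaths_per_twh(NUCLEAR_DEATHS, NUCLEAR_TWH):.4f}")
print(f"hydro w/ the dam:  {deaths_per_twh(HYDRO_DEATHS_WITH_DAM, HYDRO_TWH):.4f}")
print(f"hydro w/o the dam: {deaths_per_twh(HYDRO_DEATHS_WITHOUT_DAM, HYDRO_TWH):.4f}")
```

    Under these assumptions the ranking flips exactly as described: with the dam included, hydro looks far deadlier per TWh than nuclear; excluded, it comes out safer.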


  • Sure, training data selection impacts the output. If you feed an AI nothing but anime, the images it produces will look like anime. If all it knows is K-pop, then the music it puts out will sound like K-pop. Tweaking a computational process through selective input is not the same as a human being actively absorbing stimuli and forming their own, unique response.

    AI doesn’t have an innate taste or feeling for what it likes. It won’t walk into a second-hand CD store, browse the boxes, find something intriguing and check it out. It won’t go for a walk and think “I want to take a photo of that tree there in the open field”. It won’t see or hear a piece of art and think “I’d like to learn how to paint/write/play an instrument like that”. And it will never make art for the sake of making art, for the pure enjoyment that is the process of creating something, irrespective of who wants to see or hear the result. All it is designed to do is regurgitate an intersection of what it knows that best suits the parameters of a given request (aka prompt). Actively learning, experimenting, practicing, trying to emulate someone else’s specific techniques - making art for the sake of making art - is a key component of humans learning from others and being influenced by others.

    So the process of human learning and influencing, and the selective feeding of data to an AI to ‘tune’ its output are entirely different things that cannot and should not be compared.


  • Generative AI is not ‘influenced’ by other people’s work the way humans are. A human musician might spend years covering songs they like and copying or emulating the style until they find their own, which may or may not be a blend of their influences, but crucially, they will usually add something. AI does not do that. The idea that AI functions the same way human artists do, absorbing influences and producing its own result, is not only fundamentally false, it is dangerously misleading. To portray it as ‘not unethical’ is more misleading still.