AmbitiousProcess (they/them)

  • 0 Posts
  • 46 Comments
Joined 7 months ago
Cake day: June 6th, 2025


  • The problem is, it’s not unobtrusive.

    When I right click and instantly get an option, silently added to the menu, that sends data to an AI model hosted somewhere, and I accidentally click it due to muscle memory, the fact that there’s also an option to disable it doesn’t make that okay. When I start up my browser after an update and am immediately greeted by an open sidebar asking me to pick an AI model to use, that’s obtrusive and annoying to have to close and disable.

    Mozilla has indicated they do not want to make these features opt-in, but opt-out. The majority of Mozilla users do not want these features by default, so the logical option is to make them solely opt-in. But Mozilla isn’t doing that. Mozilla is enabling features by default, without consent, then only taking them away when you tell them to stop.

    The approach Mozilla is taking is like telling a guy you aren’t interested in dating him, but instead of taking that as a “no,” he takes it as a “try again with a different pickup line in 2 weeks” and never, ever stops no matter what you try. It doesn’t matter that you can tell him to go away now if he’ll just keep coming back.

    Mozilla does not understand consent, and they are violating the consent of their users every time they push an update with AI features that are turned on by default.


  • Because Google only pays Mozilla for two reasons:

    • Maintaining search dominance
    • Preventing anti-monopoly scrutiny

    They don’t want Mozilla to compete in the AI space, because neither of those reasons applies there: with how much money gets thrown around, there’s already a ton of competition in AI, so funding Mozilla’s AI efforts wouldn’t deflect anti-monopoly scrutiny, and with that many models around, it wouldn’t do anything for search dominance either. They’d much rather have Mozilla stay a non-AI browser while they implement AI features themselves and show shareholders that they’re “the most advanced” of them all, or that “nobody else is doing it like we do”.



  • Videos, images, and text can absolutely compel action or credible harm.

    For example, Facebook was aware that Instagram was giving teen girls depression and body image issues, and subsequently made sure their algorithm would continue to show teen girls content of other girls/women who were more fit/attractive than them.

    the teens who reported the most negative feelings about themselves saw more provocative content more broadly, content Meta classifies as “mature themes,” “Risky behavior,” “Harm & Cruelty” and “Suffering.” Cumulatively, such content accounted for 27% of what those teens saw on the platform, compared with 13.6% among their peers who hadn’t reported negative feelings.

    https://www.congress.gov/117/meeting/house/114054/documents/HHRG-117-IF02-20210922-SD003.pdf

    https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/

    Many girls have committed suicide or engaged in self-harm, at least partly driven by body image issues stemming from Instagram’s algorithmic choices, even if that content is “just videos and images.”

    They also continued to recommend dangerous content that they claimed was blocked by their filters, including sexual and violent content, to children under 13. This type of content is known to have a lasting effect on kids’ wellbeing.

    The researchers found that Instagram was still recommending sexual content, violent content, and self-harm and body-image content to teens, even though those types of posts were supposed to be blocked by Meta’s sensitive-content filters.

    https://time.com/7324544/instagram-teen-accounts-flawed/

    In the instance you’re specifically highlighting, that was Meta recommending teen girls’ accounts to men exhibiting behaviors that could very easily lead to predation. For example, if a man specifically liked sexual content and content of teen girls, it would recommend him content from underage girls trying to make up for their newly created body image issues by posting sexualized photos.

    They then waited 2 years before implementing a private-by-default policy, under which teen girls’ accounts wouldn’t be recommended to strangers unless the teens explicitly made them public again. Most didn’t. Meta waited that long because internal research showed it would decrease engagement.

    By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram, which became the underlying reason for not protecting minors.

    https://techoversight.org/2025/11/22/meta-unsealed-docs/

    If I filled your social media feed with endless posts specifically algorithmically chosen to make you spend more time on the app while simultaneously feeling worse about yourself, then exploited every weakness the algorithm could identify about you, I don’t think you’d look at that and say it’s “catastrophizing over videos, images, text on a screen that can’t compel action or credible harm” when you develop depression, or worse.


  • This whole article is just a condescending mess.

    “Why does everyone who has been repeatedly burned by AI, time and time again, whether that be through usable software becoming crammed full of useless AI features, AI making all the information they get less reliable, or just having to hear people evangelize about AI all day, not want to use my AI-based app that takes all the fun out of deciding where you go on your vacation???”

    (yes, that is actually the entire proposed app. A thing where you say where you’re going, and it generates an itinerary. Its only selling point over just using ChatGPT directly is that it makes sure the coordinates of each thing are within realistic travel restrictions. That’s it.)


  • And it’s more expensive than the most expensive US Mobile plan, which would get you faster speeds, whereas Trump Mobile’s speeds drop off after a certain amount of data (a lower cap than T-Mobile’s own plans), since they’re just an MVNO running on T-Mobile’s network, and its data gets deprioritized during periods of network congestion.

    That US Mobile plan would also get you the ability to switch underlying network providers if you’re in a dead zone, international calling and data in more locations, better customer support (going by all the experiences we’ve seen from reviewers), and unlimited hotspot data, plus better bundle deals for families or people with smartwatches that need separate data.

    Hell, even T-Mobile’s own plans are usually substantially more expensive than those of the companies that just run as MVNOs on its network, like Mint Mobile (which is actually owned by T-Mobile now), and Mint will get you the same value as T-Mobile’s $50/mo plan for $30/mo, or just $15/mo for new users for up to a 12 month period.

    Trump Mobile is just $2.55 cheaper than T-Mobile’s $50 plan.
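
    To put rough numbers on the prices above (just arithmetic on the monthly rates mentioned in this thread; taxes, fees, and promo expirations are ignored):

    ```python
    # Rough annual cost comparison using the monthly prices mentioned above.
    # Illustrative arithmetic only; taxes, fees, and promo expirations are ignored.
    plans = {
        "Trump Mobile ($47.45/mo)": 47.45,
        "T-Mobile $50/mo plan": 50.00,
        "Mint Mobile ($30/mo)": 30.00,
        "Mint Mobile intro rate ($15/mo, first 12 months)": 15.00,
    }

    for name, monthly in plans.items():
        print(f"{name}: ${monthly * 12:,.2f}/yr")
    ```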


  • It runs autonomously to a degree, but a lot of these sites operate by posting a wide variety of content on the same domains after those domains have already gained status in search engines.

    So for example, you’ll have a site like epiccoolcarnews[.]info hosting stuff like “How to get FREE GEMS in Clash of Clans” just because previously they posted an article about cars that Google thought was good so they ranked up the domain in their ranking algorithm.

    Permanently downrank the domain, and eventually they have to start over with a new domain that, and this is the key part, has no prior reputation, and thus has to work to actually get ranked up in search again.

    They’re also going to make this a public database, and have said they’ll use it to train AI-generated content detection tools. Those tools will probably be better at detecting “AI generated articles meant to appear legitimate by using common keywords and phrases”, rather than just flagging “any text of any form that has been generated by AI” like other AI detection tools do, which would let them automate the process a bit, specifically with regard to search engines.
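
    As a rough sketch of how a permanent, domain-level penalty could be applied on top of whatever a ranking algorithm produces (the flagged entries, field names, and penalty factor here are hypothetical, not the project’s actual design):

    ```python
    # Hypothetical sketch: downrank any result whose domain appears in a public
    # list of flagged spam domains, regardless of the domain's prior reputation.
    # The flagged entries and penalty factor are made up for illustration.
    from urllib.parse import urlparse

    flagged_domains = {"epiccoolcarnews.info"}  # hypothetical database entry

    def penalized_score(url: str, base_score: float, penalty: float = 0.01) -> float:
        """Return the ranking score, crushed if the domain has been flagged."""
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        return base_score * penalty if domain in flagged_domains else base_score

    results = [
        ("https://epiccoolcarnews.info/free-gems-clash-of-clans", 0.92),
        ("https://example.com/actual-car-review", 0.85),
    ]

    # Re-rank: the flagged domain falls to the bottom even though its old
    # reputation gave it the higher base score.
    for url, score in sorted(results, key=lambda r: penalized_score(*r), reverse=True):
        print(url, penalized_score(url, score))
    ```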




  • The study claims that they analyzed participants’ labor market outcomes, namely earnings and propensity to move jobs, “among other things.”

    Fun fact, did you know white men tend to get paid more than black men for the same job, with the same experience and education?

    Following that logic, if we took a dataset of both black and white men, then used their labor market outcomes to judge which candidate would be a better fit, the white men would have higher earnings and would get recommended for jobs more often than the black men.

    Black workers are also more likely to switch jobs, likely in part because you tend to see higher salary growth when moving jobs every 2-3 years than when staying with a given company, which matters even more if you’re already being paid lower wages than your white counterparts.

    By this study’s methodology, that person could be deemed “unreliable” because they often switch jobs, and would then not be considered.
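
    As a toy illustration of that failure mode (all numbers and the scoring rule below are made up, not taken from the study):

    ```python
    # Toy illustration: a model scored purely on "labor market outcomes" will
    # happily reproduce an existing wage gap. All numbers are made up.
    from statistics import mean

    candidates = [
        # (group, prior_earnings, job_switches_in_5yrs)
        ("white", 72_000, 1),
        ("white", 70_000, 1),
        ("black", 61_000, 2),  # paid less for the same work...
        ("black", 63_000, 3),  # ...and switching jobs more often to catch up
    ]

    def outcome_score(earnings: int, switches: int) -> float:
        # Naive "fit" score: reward earnings, punish job switching as "unreliable".
        return earnings / 1_000 - 10 * switches

    for group in ("white", "black"):
        scores = [outcome_score(e, s) for g, e, s in candidates if g == group]
        # The white candidates score higher on average, purely because the
        # "outcomes" used as features already encode the pay gap and its effects.
        print(group, round(mean(scores), 1))
    ```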

    Essentially, this is a black box that gets to excuse management saying “fuck all black people, we only want to hire whites” while sounding all smart and fancy.




  • Oh, of course the legislation is to blame for a lot of this in the end. I’m just saying that Discord could have already partnered with any number of identity verification services that already have this infrastructure up and running, with standardized, documented APIs for both verifying a user and checking their verification status.

    At the end of the day, Discord chose to implement a convoluted process where users email Discord and upload their IDs, and Discord then pulls those IDs back down from Zendesk and verifies them, rather than a system where users could have simply gone to a third-party verification site, done all the steps there, had their data processed far more securely, and had that site just send Discord a message saying “they’re cool, let ’em in”.
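
    A minimal sketch of what that hand-off could look like, assuming a hypothetical provider that sends back a signed “verified” assertion (the URLs, field names, and shared key below are invented for illustration, not any real vendor’s API):

    ```python
    # Hypothetical hand-off: the platform never touches the ID document itself.
    # It redirects the user to a verification provider, then only accepts a
    # signed "verified" result. URLs, field names, and the key are invented.
    import hashlib
    import hmac
    import json

    SHARED_SECRET = b"provider-signing-key"  # agreed out of band with the provider

    def start_verification(user_id: str) -> str:
        # Send the user to the provider's hosted flow; no ID images ever hit our servers.
        return f"https://verify.example-provider.com/start?user={user_id}"

    def handle_callback(payload: bytes, signature: str) -> bool:
        # Provider calls back with {"user": ..., "verified": true} plus an HMAC signature.
        expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            return False  # reject forged or tampered callbacks
        return bool(json.loads(payload).get("verified"))

    # Simulated callback from the provider:
    payload = json.dumps({"user": "1234", "verified": True}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    print(handle_callback(payload, sig))  # True -> "they're cool, let 'em in"
    ```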


  • In my opinion, they’re still somewhat at fault, because this was them failing to find and configure their software to work with a third-party identity provider whose infrastructure was built to handle the security of sensitive information, and instead choosing to use email through Zendesk because it was easier in the meantime, a platform that, I should note, has been routinely accessed again and again by attackers, not just at Discord, but at all sorts of other companies.

    The main problem is that legislation like the Online Safety Act requires some privacy protections, like not collecting or storing certain data unless necessary, but it doesn’t require any particular security measures to be in place. This means that, theoretically, nothing stops a company from passing your ID to their servers in cleartext, for example.
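
    To make that concrete, both of the following would arguably satisfy a “collect it, then delete it when no longer needed” rule, but only one involves an actual security measure (a deliberately simplified sketch, not any company’s real code):

    ```python
    # Deliberately simplified illustration of the gap: privacy rules about what
    # you may collect say nothing about how it is protected in the meantime.
    # Key handling is minimal on purpose; a real system would use a KMS/HSM.
    from cryptography.fernet import Fernet

    id_scan = b"<raw bytes of an uploaded ID photo>"

    # What nothing in the law forbids: write the raw document straight to disk.
    with open("id_scan_plain.jpg", "wb") as f:
        f.write(id_scan)

    # An actual security measure: encrypt the document at rest with a managed key.
    key = Fernet.generate_key()  # in practice, fetched from a key management service
    with open("id_scan_encrypted.bin", "wb") as f:
        f.write(Fernet(key).encrypt(id_scan))
    ```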

    Now compare this to industries like the credit card industry, which created PCI DSS, a standard that mandates specific security practices. That’s why you don’t often see breaches of the card networks or issuers themselves, and why most fraud happens outside the systems that actually process card payments (e.g. phishing attacks that grab your card info, or a store that already has your card info getting hacked).

    This is a HUGE oversight, and one that will lead to things like this happening over and over unless it becomes unprofitable for companies to not care.