• 1 Post
  • 48 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • Ridiculous that Grammarly even attempted to do this. The article was good, but at the end, even though they hedged, they fell into the same trap everyone seems to: assuming AI is better at coding than it is at writing. Their tinkering here does not suggest that. Grammarly had a bad product, but realistically there was probably just no effort put into this aspect of the software. Maybe I’m way off base, and I don’t support AI either way, but I just think it was a poor way to end the article. Programmers think it’s good for art, artists think it’s good for programming; it’s almost as if it’s easier to see the flaws in a field you’re familiar with.





  • Yang is a grifter and no one should listen to him. Companies will happily use any excuse to fire employees and create a perception of job scarcity so that they can rehire workers who are scared and desperate and willing to take less compensation for more work.

    All of that said, AI is definitely being incorporated quite heavily into a lot of products. It’s already caused issues with services we all rely on, and I hope we are able to hold companies accountable and stop patronizing them wherever possible. AI cannot do a lot of the things they are pretending it can and we are paying the price, not the companies responsible.






  • I don’t think training on all public information is super ethical regardless, but to the extent that others may support it, I understand that SO may be seen as fair game. To my knowledge, though, all the big AIs have been trained on GitHub regardless of any individual project’s license.

    It’s not about proving individual code theft, it’s about recognizing that the model itself is built from theft. Just because an AI image output might not resemble any preexisting piece of art doesn’t mean it isn’t based on theft. Can I ask what you used that was trained on just a project’s documentation? Considering the amount of data usually needed for coherent output, I would be surprised if it did not need some additional data.


  • If you acknowledge the problem with theft from artists, do you not acknowledge there’s a problem with theft from coders? Code intended to be fully open source with licenses requiring derivatives to be open source is now being served up for closed source uses at the press of a button with no acknowledgement.

    For what it’s worth, I think AI would be much better in a post scarcity moneyless society, but so long as people need to be paid for their work I find it hard to use ethically. The time it might take individuals to do the things offloaded to AI might mean a company would need to hire an additional person if they were not using AI. If AI were not trained unethically then I’d view it as a productivity tool and so be it, but because it has stolen for its training data it’s hard for me to view it as a neutral tool.


  • Other than being an obvious ad for their own AI, the article was pretty informative.

    Per the article, the following extensions were found to be affected. Nothing else from this publisher should be trusted either, as they’re just a data-mining company, so make sure not to download any rebrands or new releases from the same people.

    Chrome Web Store:

    • Urban VPN Proxy - 6,000,000 users
    • 1ClickVPN Proxy - 600,000 users
    • Urban Browser Guard - 40,000 users
    • Urban Ad Blocker - 10,000 users

    Microsoft Edge Add-ons:

    • Urban VPN Proxy - 1,323,622 users
    • 1ClickVPN Proxy - 36,459 users
    • Urban Browser Guard - 12,624 users
    • Urban Ad Blocker - 6,476 users

  • Absolutely infuriating. I’m upset the judge did not award the full extent of monetary damages even though it’s evident that Verizon is in violation of multiple agreements.

    I know it’s not how this works, but since the FCC put those rules in place as a condition of their acquisition of the other companies, and since they violated those rules, the government should be able to nationalize/seize the assets of the other companies. Verizon should not legally have them since it broke the agreement. I’d love to see not just a one-time fine but a legitimate punishment. If this guy hadn’t done this, they’d still be knowingly violating their agreement. The people doing this are disgusting and are taking advantage of those with the least time and resources. I truly wish they all have the day they deserve.




  • By getting better, I mean it will be improving on itself. I never meant to indicate that it will be better than a trained professional.

    I agree that showing ND people empathy is the best path forward, but realistically, being able to socially signal empathy is a life skill, and lacking that skill mostly damages their own prospects. It’d be great if it didn’t make people less likely to be employable or less able to build a robust support network, but unfortunately that’s the case. Yes, ASD differences are often a reflection of how society treats people, but a demonstration of empathy is not a platitude; it’s an important way NT and many ND people connect.

    If you think that expressing empathy is difficult for people with ASD because they are more honest, then I think you might be equating a lack of empathy with difficulty expressing it. There’s nothing dishonest about saying “I’m sorry that happened to you” unless you are not sorry it happened. It might not be something you would normally verbally express, but if hearing about a bad thing happening to someone doesn’t make you feel for them, then the difficulty isn’t expressing empathy, it’s lacking it. Society certainly does a lot of things for bad or nonsensical reasons, but expressing empathy generally isn’t one of them.


  • I don’t personally find the framing offensive, but I’m not on the spectrum so I can’t speak to it from that perspective. My comment was less about the article and more about not offloading that work onto unsuspecting and unprepared people.

    That being said, I’m not as anti-AI as some other people might be when it comes to these kinds of tools. The study itself highlights the fact that not everyone has the resources to get the kind of high-quality care they need, and this might be an option. I agree that sacrificing quality for efficiency is bad (in my post history you can see I made that argument about AI myself), but realistically, many of the people who could benefit from this would otherwise have no alternatives. Additionally, AI will only be getting better, and hopefully you’ve never had a bad experience with a professional, but I can speak from personal experience that quality varies drastically between individuals in the healthcare industry.

    If this is something that can be offered by public libraries or school systems, so that anyone with the need can take advantage, I think that would be a positive, because we’re nowhere near universal physical healthcare, much less universal mental healthcare or actual social development training. I know people who cannot afford healthcare even though they have insurance, so if they were able to go to a specialized AI for an issue, I’d consider it a net positive even if it’s not a real doctor.

    I know that AI is not there yet, and there’s a lot of political and social baggage, but the reality is people need help, they need it now, and they are not getting it. I don’t know how good this AI is, but if the alternative is telling people who are struggling and have no other options that they have to tough it out, I’m willing to at least entertain the idea. For what it’s worth, if I could snap my fingers and give everyone all the help and support they need without AI, I would choose that option; I just don’t have it. I also don’t know that LLMs can really do this successfully at a large scale, so I would need evidence of that before really supporting it. I just think it shouldn’t be written off completely if it’s showing promise.


  • I really don’t think a random D&D table is the place to learn to express empathy. I really wish people would stop acting like local D&D groups are a good way to learn how to socialize in general. I’m not saying you can’t learn things at the table, but the games are not actual reflections of reality and there’s a lot of go along to get along, or just run of the mill toxic group dynamics. The hobby overall can be hard for other minorities to enter, and having a table with someone still learning social skills (especially how to express empathy) and someone from a marginalized group can lead to unfortunate outcomes that your standard DM/group do not have the ability to address. It can lead one or both parties to have negative experiences that reinforce the idea they are unwelcome and leave the rest of the table with negative experiences of playing with ND people or minorities.

    Sometimes practicing first with people trained to do this is the best step, and second to that would be practicing empathy in a space where the main goal is bonding rather than another nebulous goal of having fun playing a game. I don’t know if AI is the answer, but trusting your local DM/table to be able to teach empathy is a big ask. It’s almost insulting to the people who teach this and to people with ASD. Teaching empathy can’t be as passive as it is for non-ASD people, and acting like it’s just something they are expected to pick up while also dealing with all these other elements makes it seem like you don’t think it’s something they actually have to work to achieve. I’m not on the spectrum, but I have a lot of autistic friends, and I would not put just any of them in a D&D situation and expect them and the rest of the table to figure it out.

    Also, generally comparing to an unaffected control is the gold standard. They did what is generally needed to show their approach has some kind of effect.


  • If you think we should offload to AI even if it’s worse, I have serious questions about your day-to-day life. What industry do you think could stand to be worse? Doctor’s offices? Lawyers? Mechanics? Accountants?

    The end users (aka the PEOPLE NEEDING A SERVICE) are the ones getting screwed over when companies offload to AI. You tell an AI to schedule an appointment tomorrow, and 80% of the time it does, and 20% of the time it just never does or puts it on for next week. That hurts both the office trying to maximize the people seen/helped and the person who needs the help. Working fewer hours due to tech advancement is awesome, but in reality, offloading to AI in the current work climate is not going to result in working fewer hours. Additionally, how costly is each task the AI is doing? Are the machines running off of renewables, or is using this going to contribute to worse air quality and worse climate outcomes for the people you’re trying to save from working more? People shouldn’t have to work their lives away, but we have other problems that need to be solved before prematurely switching to AI.