• 1 Post
  • 33 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • By “getting better,” I mean it will keep improving. I never meant to indicate that it will be better than a trained professional.

    I agree that showing ND people empathy is the best path forward, but realistically, being able to socially signal empathy is a life skill, and lacking that skill mostly damages their own prospects. It’d be great if it didn’t make people less employable or less able to build a robust support network, but unfortunately that’s the case. Yes, ASD differences are often a reflection of how society treats people, but a demonstration of empathy is not a platitude. It’s an important way NT and many ND people connect.

    If you think that expressing empathy is difficult for people with ASD because they are more honest, then I think you might be equating a lack of empathy with difficulty expressing it. There’s nothing dishonest about saying “I’m sorry that happened to you” unless you are not sorry it happened. It might not be something you would normally verbalize, but if hearing about a bad thing happening to someone doesn’t make you feel for them, then the difficulty isn’t expressing empathy, it’s lacking it. Society certainly does a lot of things for bad or nonsensical reasons, but expressing empathy generally isn’t one of them.


  • I don’t personally find the framing offensive, but I’m not on the spectrum so I can’t speak to it from that perspective. My comment was less about the article and more about not offloading that work onto unsuspecting and unprepared people.

    That being said, I’m not as anti-AI as some other people might be when it comes to these kinds of tools. The study itself highlights the fact that not everyone has the resources to get the kind of high-quality care they need, and this might be an option. I agree that sacrificing quality for efficiency is bad, and you can see in my post history that I’ve made that argument about AI myself, but realistically, many people who would otherwise have no alternatives could potentially benefit from this. Additionally, AI will only be getting better, and hopefully you’ve never had a bad experience with a professional, but I can say from personal experience that quality varies drastically between individuals in the healthcare industry.

    If this is something that could be offered by public libraries or school systems, so that anyone with the need can take advantage of it, I think that would be a positive, because we’re nowhere near universal physical healthcare, much less universal mental healthcare or actual social development training. I know people who cannot afford healthcare even though they have insurance, so if they were able to go to a specialized AI for an issue, I would consider that a net positive even if it’s not a real doctor.

    I know that AI is not there yet, and there’s a lot of political and social baggage here, but the reality is that people need help, they need it now, and they are not getting it. I don’t know how good this AI is, but if the alternative is telling people who are struggling and have no other options to tough it out, I’m willing to at least entertain the idea. For what it’s worth, if I could snap my fingers and give everyone all the help and support they need without AI, I would choose that option; I just don’t have it. I also don’t know that LLMs can really do this successfully at a large scale, so I would need evidence of that before really supporting it. I just think it shouldn’t be written off completely if it’s showing promise.


  • I really don’t think a random D&D table is the place to learn to express empathy. I really wish people would stop acting like local D&D groups are a good way to learn how to socialize in general. I’m not saying you can’t learn things at the table, but the games are not actual reflections of reality, and there’s a lot of go-along-to-get-along, or just run-of-the-mill toxic group dynamics. The hobby overall can be hard for other minorities to enter, and having a table with someone still learning social skills (especially how to express empathy) and someone from a marginalized group can lead to unfortunate outcomes that your standard DM/group does not have the ability to address. It can lead one or both parties to have negative experiences that reinforce the idea that they are unwelcome, and leave the rest of the table with negative experiences of playing with ND people or minorities.

    Sometimes practicing first with people trained to do this is the best step, and second to that would be practicing empathy in a space where the main goal is bonding, rather than the nebulous goal of having fun playing a game. I don’t know if AI is the answer, but trusting your local DM/table to be able to teach empathy is a big ask. It’s almost insulting both to the people who teach this and to people with ASD. Teaching empathy can’t be as passive as it is for non-ASD people, and acting like it’s just something they’re expected to pick up while also dealing with all these other elements makes it seem like you don’t think it’s something they actually have to work to achieve. I’m not on the spectrum, but I have a lot of autistic friends, and I would not put just any of them in a D&D situation and expect them and the rest of the table to figure it out.

    Also, generally comparing to an unaffected control is the gold standard. They did what is generally needed to show their approach has some kind of effect.


  • If you think we should offload to AI even if it’s worse, I have serious questions about your day-to-day life. What industry do you think could stand to be worse? Doctors’ offices? Lawyers? Mechanics? Accountants?

    The end users (aka the PEOPLE NEEDING A SERVICE) are the ones getting screwed over when companies offload to AI. You tell AI to schedule an appointment tomorrow, and 80% of the time it does, and 20% of the time it never does or puts it in for next week. That hurts both the office trying to maximize the people seen/helped and the person who needs the help. Working fewer hours due to tech advancement is awesome, but in reality, offloading to AI in the current work climate is not going to result in working fewer hours. Additionally, how costly is each task the AI is doing? Are the machines running off of renewables, or is using this going to contribute to worse air quality and worse climate outcomes for the people you’re trying to save from working more? People shouldn’t have to work their lives away, but we have other problems that need to be solved before prematurely switching to AI.


  • I’m a woman and not on Tinder, but I don’t know why people don’t like this. Anyone listing a height preference is not the kind of person you should be looking for, especially if you don’t fit their preference, imho. It’s literally self-filtering, though the article did say it’s not fully blocking anyone or anything.

    I know women who would’ve loved that feature, and I would never suggest any of my friends date them. Even after they dated guys who didn’t fit the criteria and amicably split, they still held firm to the idea. I think it’s OK to have preferences, but this is a dumb thing to filter for, and people are dumb to want to match with these people.


  • Part of it is likely that she is a famous woman who is not known for being sexualized and is considered a public figure. No one wants the scientific standard to be “I used pics of this girl I had a crush on,” so I imagine famous people are good to pick from. I imagine Merkel also doesn’t have a lot of bikini pics the AI can draw from (some amount of swimming pics are unfortunately always available for public figures, for some reason), so you can be sure it generated them from nothing. If you used a famous model, there’s a possibility it would be using pics of them to model their chest. If you’re testing what it does with random inputs, using Merkel is probably a good option.

    As far as the output being what was requested, I think the issue can vary depending on your view of AI, so I’m just going to leave that part unanswered, because whether or not it’s a problem relies so much on your priors.


  • Regarding the post title, I didn’t mean to imply it was your decision, just that the title in general feels misleading, seeing as no argument is presented and it purports to be about “why” you should do something.

    I don’t really know a lot about the difference between instances, so I can’t really opine on that. From what I’m seeing, it does look as though people are having a discussion, they just don’t seem to agree with the idea as presented.

    I’m not saying it has to be longer-format. I’m saying no argument is made. She could have given the AI the same prompt but with a please and compared the results; I have done so below. She could have used the irrelevant middle section to just display screenshots.

    It’s fine if you think people should be polite to ai. I think that’s great, but if you feel that way regardless of effect on response quality then this video doesn’t really make that point either. I think a post saying that and offering your personal opinions might’ve prompted more discussion on that topic rather than just disagreement with a video that doesn’t make that argument.

    Here are two prompt comparisons. There is, imho, no difference in quality when saying please. The same results can be seen regardless of prompt length or how polite I was; I just chose short examples that could have been used in the video if they actually illustrated her point.

    List five animals

    Here are five animals:

    1. Lion
    2. Octopus
    3. Koala
    4. Penguin
    5. Jaguar

    List five animals please

    Here are five animals:

    1. Tiger
    2. Dolphin
    3. Raccoon
    4. Falcon
    5. Elephant

    Create a limerick about being polite

    There once was a person so fine,
    Whose manners were truly divine.
    They’d say “please” with a grin,
    And “thank you” within,
    And their politeness was truly sublime.

    Create a limerick about being polite please

    There once was a person so bright,
    Whose politeness was a delight.
    They’d hold doors with care,
    And listen with flair,
    And their kindness was always in sight.
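    If you want to run this kind of comparison yourself at any scale, the pairing is easy to script. Below is a minimal sketch; the `ask()` function is a hypothetical stand-in for whatever chat model you’d actually query (here it just returns a canned answer so the snippet runs on its own), and with a real model you’d average the similarity over many prompts rather than eyeball two examples.

```python
import difflib

def ask(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-model call; returns a
    # canned answer so this sketch runs without any API access.
    return ("Here are five animals:\n"
            "1. Lion\n2. Octopus\n3. Koala\n4. Penguin\n5. Jaguar")

def politeness_effect(prompt: str) -> float:
    """Ask the same question with and without 'please' and return a
    similarity ratio between the two responses (1.0 = identical)."""
    plain = ask(prompt)
    polite = ask(prompt + ", please")
    return difflib.SequenceMatcher(None, plain, polite).ratio()

# With the canned stand-in the two responses are identical, so this
# prints 1.0; a real model would give varying (but comparable) output.
print(politeness_effect("List five animals"))
```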


  • What do you mean you don’t buy my argument? My argument that it is more detrimental to workers than owners? Then I’m not sure you understand capitalism, because money in workers’ pockets is inherently more impactful than money in a capitalist’s bank account. It’s a drop in the ocean for large corporations, but it can be the difference between food or housing or healthcare for a worker. The company would be spending that money anyway, so it’s already accounted for. The nominal cost of recruitment is just going to come out of a salary at the end of the day. All it does is incentivize companies to add extra hoops to the hiring process, potentially screening out real people or causing extra stress/work to apply. This will not discourage workers from applying, since, you know, the threat of capitalism still looms large, and worker protections are low and being dismantled day by day.

    If you’re not coming from an anti capitalist place, then you’re right, I don’t get it.



  • I think the issue is the post title. If the title were “role-based prompt engineering,” you probably wouldn’t have gotten as many comments, and certainly not as many disagreeing. She says she’s going to make a case for using please, and then fails to provide any actual examples of that. Pointing that out isn’t sanctimonious, nor does it mean people are being rude to AI. If you want to make a moral argument for it, go ahead, but it seems like she’s attempting to make a technical argument and then just doesn’t. For what it’s worth, I generally try to leave superfluous words out of prompts, in the same way that googling full sentences was previously less likely to return a good answer than just keywords. AI is not human. It’s a tool. If being rude to it ensured it would stop hallucinating, I don’t think it’d make you a bad person to be rude to it.

    There’s a comment here talking about antisocial behavior in gaming, and imho, if you kick a dog in a video game without hesitation, I’m not sure I’d view you the same way after. Plenty of people talk about how they struggle to do evil playthroughs because they don’t like using rude options with NPCs. Not saying please to AI doesn’t make you a psychopath.


  • She didn’t make that point at all. She starts with “not because of the robot apocalypse,” meanders in the middle about “prompt engineering,” aka telling the AI what manner you want it to respond in - Shakespearean, technical, encyclopedic - (yea, we know), then ends with “it’s better to be polite.” It’s clickbait. She literally does not address why saying please is important outside of the last sentence, where she says it’s better to be polite. Saved you a click.





  • Yea, I agree 100%. My comment was definitely ambiguous, but I’m not expecting my old phone to get updated with AI tools (though it actually was); it’s more that I don’t want an AI-specific gadget, and I don’t think anyone but an enthusiast would. I definitely see these as the new VR, as you mentioned. It seems the article was lamenting product development as though it in itself were an end goal. UX and efficiency should be the end goal, not just making things for the sake of saying you made something. I obviously support people expressing themselves and experimenting, but the framing in the article is so strange and reads like they’re lamenting the fact that capitalism has reached its latter stages more than anything else.


  • I’m not one to disagree with blaming capitalism lol. I was watching something recently about how millennials grew up with techno-optimism, and I feel like we’re seeing the results of that: millennials wanted tech to solve everything and grew up being into gadgets as a concept rather than as products, and the new generation is so subsumed by tech that it ceases to be tech at all, the way indoor plumbing or even electricity isn’t really seen as tech anymore, even though both revolutionized our lifestyles. I think there’s some warranted backlash to tech (cottagecore/trad living) and the way it has atomized everyone, and I’m not sure people are as excited about it anymore. Price is definitely an issue, but I really think tech is failing to fulfill us, and people are seeing that on some level (all of this is also somewhat attributable to capitalism).



  • Unfortunately, I can’t speak intelligently as to what specifically should be done with IP, but in broad strokes I agree that output should be public domain and public-facing models should be open. I do feel as though there should be a way to compensate people for inputs used for internal commercial purposes.

    If training for something exists as a separate book/video, a company should not be able to throw that into an AI and generate a new book/video for their internal use. Either they need to make that resource available publicly, or purchase a specific license for internal AI use of the original material. I don’t know why I think that; it’s mostly just vibes-based, because if they hired a person/company to do the same thing I’d be fine with it, so maybe I just have some cognitive dissonance going on, but it feels different. In the same way that there are commercial and personal licenses, I think having an AI license might make sense. But again, I’m way out of my depth and field of knowledge here, so I could be way off.