Engineer/Mathematician/Student. I’m not insane unless I’m in a schizoposting or distressing memes mood; I promise.

  • 0 Posts
  • 19 Comments
Joined 2 years ago
Cake day: July 28th, 2023

  • This is kind of how my life felt before I got medicated for ADHD. Not being able to do things even when they’re super easy (or worse when they are things you want to do but you just can’t get yourself to do them for no fucking reason) is called Executive Dysfunction, and it is the ADHD symptom I probably suffer from the most. Good news: meds can help with this.

    Now, I still feel unmotivated sometimes even on my meds, and general hopelessness from the meaninglessness of existence is ever present.

    However, just the ability to plan and to start tasks without having to spend hours building the motivation is amazing. I just do things when I think about them, even when I don’t want to. Like I’ll say, “I have time to put off this work and play video games,” and then before I even start playing I decide I might as well do the task first.

    I still don’t get pleasure out of completing tasks, but being able to complete and keep track of tasks means that eventually I reach a point where I don’t have any more tasks to do in the moment, and that peace is incredible.

    It’s so nice not being anxious all the time about all the tasks I need to do because they’re just done.

    Also, meds actually help me sleep soundly and regularly, to the point I don’t really need an alarm. Despite that, they don’t make me feel sleepy during the day. (I should note I also take melatonin before bed, so maybe it’s the combination that leads to perfectly regular sleep, idk.)

    Anyway, if I were you I might look into talking to a psychiatrist to see if you have ADHD.

    PS: tip for anyone with ADHD meds, if they give you meds that don’t work for you, don’t be scared to ask for a change. Methylphenidate made me super anxious, killed my appetite, and wore off fast. Adderall doesn’t have any noticeable side effects and works well.




  • hihi24522@lemm.ee to Technology@lemmy.world · *Permanently Deleted*
    14 days ago

    Valid point, though I’m surprised that Cyc was used for non-AI purposes since, in my very, very limited knowledge of the project, I thought the whole thing was based around the ability to reason and infer from an encyclopedic data set.

    Regardless, I suppose the original topic of this discussion is heading towards a prescriptivist vs descriptivist debate:

    Should the term Artificial Intelligence have the more literal meaning it held when it was first discussed, like by Turing or in the sci-fi of Isaac Asimov?

    OR

    Should society’s use of the term, whether in reference to advances in problem-solving tech in general or, most prevalently, to any neural network or learning algorithm, be the definition of Artificial Intelligence?

    Should we shift the definition of a term to match popular use, regardless of its original intended meaning, or should we try to keep the meaning of the phrase specific/direct/literal and fight the natural shift in language?

    Personally, I prefer the latter, because I think keeping the meaning as close to literal as possible increases the clarity of the words, and because the term AI is now thrown about as a buzzword for clicks or money, typically by people pushing lies about the capabilities or functionality of the systems they’re calling AI.

    The lumping together of models trained by scientists to solve novel problems with models using the energy of a small country to plagiarize artwork is also not something I view fondly, as I’ve seen people assume the two are one and the same despite the fact that one has redeeming qualities and the other is mostly bullshit.

    However, it seems that many others are fine with or in support of a descriptivist definition where words have the meaning they are used for even if that meaning goes beyond their original intent or definitions.

    To each their own, I suppose. These preferences are opinions, so there really isn’t an objectively right or wrong answer to this debate.


  • hihi24522@lemm.ee to Technology@lemmy.world · *Permanently Deleted*
    14 days ago

    The term “artificial intelligence” is supposed to refer to a computer simulating the actions/behavior of a human.

    LLMs can mimic human communication and therefore fit the AI definition.

    Generative AI for images is a much looser fit, but it still fulfills a purpose that until recently most people thought only humans could, so some people think it counts as AI.

    However, some of the earliest AIs in computer programs were just NPCs in video games, looong before deep learning became a widespread thing.

    Enemies in video games (typically referring to the algorithms used for their pathfinding) are AI whether they use neural networks or not.

    Deep learning neural networks are predictive mathematical models that can be tuned from data, like in linear regression. This, in itself, is not AI.
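    As a toy illustration of that point (a sketch with made-up numbers, not anything from the systems discussed here), fitting a line by ordinary least squares is exactly “tuning a predictive model from data,” and nobody would call it intelligent:

```python
# Ordinary least squares: tune a two-parameter predictive model (a line)
# from data. Same basic idea as training a network, just much smaller.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # noisy samples of roughly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form slope and intercept that minimize the squared error
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
      / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """Predict y for a new x using the tuned model."""
    return slope * x + intercept
```

    Tuning a billion-parameter transformer uses gradient descent instead of a closed form, but it is the same kind of operation: adjust parameters to fit data.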

    Transformers are a special structure that can be implemented in a neural network to attenuate certain inputs. (This is how ChatGPT can act like it has object permanence or any sort of memory when it doesn’t.) Again, this kind of predictive model is not AI any more than using Simpson’s Rule to calculate a missing coordinate in a dataset would be AI.
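    For comparison, here is Simpson’s Rule in its standard role of numerically approximating an integral from sampled values; it is a plain mechanical recipe, and calling it AI would be absurd (a minimal sketch):

```python
def simpson(f, a, b, n=100):
    """Approximate the integral of f over [a, b] using Simpson's Rule
    with n subintervals (n is bumped to even if needed)."""
    if n % 2:
        n += 1
    h = (b - a) / n
    # Weights follow the 1, 4, 2, 4, ..., 2, 4, 1 pattern
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# The integral of x^2 from 0 to 1 is exactly 1/3
approx = simpson(lambda x: x * x, 0.0, 1.0)
```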

    Neural networks can be used to mimic human actions, and when they do, that fits the definition. But the techniques and math behind the models are not AI.

    The only people who refer to non-AI things as AI are people who don’t know what they’re talking about, or people who are using it as a buzzword for financial gain (in the case of most corporate executives and tech-bros it is both)



  • hihi24522@lemm.ee to Comic Strips@lemmy.world · It's the dishonesty!
    3 months ago

    Remember folks: weather models are based on historical data. As climate change forces weather patterns to break from their historical norms, weather predictions based on previous models will become increasingly inaccurate.

    Though local, short-term predictions shouldn’t be that affected, so I have no clue why Siri tells me “it doesn’t look like it will rain today” while there’s literally rain falling outside my window…


  • If it wasn’t clear, I’m well aware of the unlikelihood of the situation. But what’s the harm in believing such? I mean, it’s not like either of them is going to come back from the dead and say: “Actually, we argued about the internal weight distribution from astronaut motion, how it would affect the natural frequency of the capsule, and whether that effect would be significant enough to need accounting for, not racism.”


  • Fun fact: my grandfather was a leading engineer on the Saturn V and other aerospace projects, and according to my dad he apparently got into arguments with von Braun. Considering the line of work, and knowing some of my grandfather’s written-down arguments from that time, it’s likely these arguments were more about random physics than anything else, but I like to think it was about von Braun being a Nazi piece of shit.

    I do know my grandparents were very against segregation to the chagrin of their neighbors, so it’s not entirely unlikely right?


  • To me, it feels like there is a big difference between not realizing you are harming others and purposefully causing that harm because you know it is harm.

    A blind spot in empathy is saying something that hurts someone because you didn’t know it would hurt them. Sadism is saying something that hurts someone because you know it will hurt them.

    There’s definitely a difference between the feeling of sadism and revenge too. One you do because it feels like justice, the other you do because it feels like eating candy.

    This kind of ties into the answer to “why would you care?” This is actually something I’ve thought about a lot (big surprise lol), and the conclusion I’ve come to is that morality and empathy are not directly correlated.

    There is a difference between not stabbing someone because you’d feel that pain, and not stabbing someone because you don’t want to be the cause of someone else’s pain.

    Another influence on perceived morality is the desire to be like other people. We’re a social species, so lots of us have an innate desire to feel connected to others. Sure, there is some desire to be unique, but often that is constrained by the desire to be accepted.

    You can satisfy this feeling Patrick Bateman style like most of the psychopaths I’ve met, where you just put up a facade, doing good things only when you know you’re being watched. Or you can satisfy it by doing what I did—which come to find out is basically cognitive behavioral therapy—trying to make yourself want to do the things others think are good.

    I’m pretty sure this choice is also based on internal drives: people in the former situation want the benefits that come with being a good person or fitting in, while people in the latter case directly want to fit in. We don’t want to act good, we want to be good.

    Honestly that desire to be moral that is separate from empathy can be detrimental. People tend to say that “empathy without bounds is self destruction” but it’s been my experience that the moral obsession is more damaging.

    For example, not eating because your roommates have friends over in the kitchen and you would feel rude interrupting them is unhealthy. While it is empathy that may make you think you’ll ruin the flow of their conversation, that pain is minimal compared to the pain of not eating. You don’t skip the meal because the empathy hurts; you skip it because violating your overactive moral compass hurts.

    Anyway this is turning into a rant, so I should stop. I do agree with you that most people seem to lack empathy for others and this is largely because they don’t try to see things from other people’s/things’ perspectives. But I disagree with your hypothesis that empathy is what drove me to increase my capacity for empathy in the first place. I think it was driven by much more self centered drives like pride and the desire to be wanted.


  • Fun fact: I think I had to purposefully construct my sense of empathy.

    I was literally like a psychopath-sadist when I was really young. I didn’t really enact anything IRL besides torturing bugs or imagining cartoon characters in pain, but around 4 years old I started feeling like I was a bad person because other people didn’t seem to desire to do those things; in fact, heroes in movies purposefully avoided violence.

    So the shame/guilt of feeling like I was a monster, and the desire to be like everyone else, led me to try to make myself feel pain when I hurt other things. When my mother or sisters would tell me to come kill a spider, I’d pinch myself or bite my tongue while doing so.

    Then, being a curious kid, I started just trying to imagine the physical sensations of being in different bodies and having different injuries. This eventually spread to trying to imagine different emotions and by and by I didn’t have to force myself to feel it anymore. When I see someone/something get hurt, I don’t have to think about it now, I just feel it.

    While I’ll admit it is possible that I’m correlating this purposeful imagination with some possible natural development of my brain creating empathy, considering that until recently I only really felt pain, negative emotions, and physical sensations through empathy, I’d say it seems most likely I built it myself.

    Since realizing this a few years ago, I have started trying to feel happy/positive empathy too and it does seem like it’s been working. Though, it’s slow going because I’m hella antisocial lol.

    Oh and just in case anyone is worried, I’m no longer sadistic at all. I literally can’t bring myself to kill spiders or other bugs, and there are some scenes in movies I can’t stand to watch. I can unfortunately still feel those old feelings and empathize with sadistic characters/actions, but the saccharine feeling of enjoying causing pain actually makes me physically sick now.


  • Executive dysfunction is a symptom of ADHD and one that I have a hard time explaining to others. Most people I know don’t understand that even if I actually want to do something, sometimes I literally just can’t start doing it, or I have to do weird shit like this to talk myself into it.

    Getting medicated helps a lot if you find the right meds. Honestly the current meds I’m on don’t help as much with focus, but they do help with just being able to fucking do shit and that’s the greater benefit in my opinion. The fact I can just think “oh I should do the dishes” and then start doing the dishes without having to think about doing it for half an hour before starting is still mind blowing to me sometimes.





  • I would assume the downvotes are more for the “religion is a framework to be shitty” part. I’m also going to get downvoted for a similar reason.

    Religion is justification for one’s moral compass / desires.

    You see people who think it’s morally okay to rape kids, or take away women’s rights, or the rights of trans people, or the rights of gay people, etc. These people can’t justify their morals (or lack thereof) logically, so they use religion to give them a false sense of rationality. Hence you think religion is a framework for being shitty.

    However, there are other people who use religion to justify “good” behavior like compassion and acceptance. These people are still reliant on fallacious beliefs, but their actions are not “shitty” so they get offended. Furthermore, others—who know people in this second category—may also think the remark about religion being shitty is not correct and is rude. That’s why it’s getting downvoted.

    Fun sidenote, we can actually formally prove that religion or at least absolute morality doesn’t matter, and that people will just do what they want no matter what:


    Proof. We seek to prove that people do whatever they want regardless of the existence of a god or absolute morality. We have three natural cases:

    Case 1: Assume neither god nor an absolute purpose/morality exists. Then a person will default to their own morals. Hence, if neither exists, people will do whatever they want.

    Case 2: Assume a god or purpose/morality exists that does not align with a person’s current morals. (For example, a god that required you to strangle six puppies every year, or required human sacrifice, or raping kids, or blowing up hospitals, or working in finance, etc.) Then this person will not follow that god/purpose because it is a bad god/purpose. Hence, a person will do whatever they feel is right even with the existence of a true deity/purpose, when that god/purpose does not share their morals.

    Case 3: Assume a true god or purpose does exist and that it aligns with the morality of a person. Then that person will be living that way anyway, so the existence of the god or purpose has no effect on them doing whatever they want.

    In each case a person will do whatever they want regardless of the existence or nonexistence of a god or a true purpose/morality. Q.E.D.


    I should note that while I did come up with this proof myself several years ago, I learned later that Marcus Aurelius and other philosophers beat me to the punch by several centuries. But hey, philosophy is the study of understanding existence; if we both exist in the same existence, we can and should be able to discover the same facts about reality.



  • Wait. I just realized energy also creates a gravitational pull, and the Death Star’s whole thing is destroying a planet, right? That’s got to take a huuuge amount of energy, because the explosion has to massively overcome the gravity holding the target together.

    A quick Google search says you’d need about 10^32 joules to blow up the Earth. E = mc², so dividing that energy by the speed of light squared gives about 1.1e15 kg of equivalent mass, which is relatively small compared to Earth’s mass (6e24 kg) but still large.

    For reference, if the radius of the Death Star were 1000 m, just that energy in its core would give only about 0.07 m/s² of acceleration at the surface.

    But if the Death Star is able to blow up multiple planets, then the energy it has to have on hand goes up. So if the Death Star contains enough energy to blow up 5.4 billion planets, then just that stored energy would have nearly equivalent “mass” to the earth.

    But gravitational acceleration is inversely proportional to distance squared. So since the Death Star is small, you wouldn’t need Earth’s full mass-equivalent to get Earth gravity. If we assume the Death Star has about a 160 km radius, you’d only need enough stored energy to blow up ~3.4 million Earths to get a surface gravity of 9.8 m/s².

    This gravity would increase as you got closer to the core or whatever part stores all that energy. But if you spread that energy out a bit, you could probably extend how large the Earth-like gravity range in the station would be.

    The mass of the structure itself would contribute to the gravity too, so that 3.4 million is probably an overestimate.

    TL;DR: From rough math, assuming a 160 km radius, a point mass, and ignoring the mass of the structure, you’d need to store ~3.4e38 J (a mass-equivalent of ~3.8e21 kg) in the Death Star to get Earth-like gravity on the surface. That is approximately the energy required to blow up 3.4 million Earths.
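    The estimate can be recomputed in a few lines. Using standard values for G and c, the ~10^32 J Earth-destruction figure, and the assumed 160 km radius, the point-mass numbers work out to roughly 3.4 million Earth-destroying shots of stored energy for 9.8 m/s² at the surface:

```python
# Sanity-check the Death Star gravity estimate (point mass, structure mass ignored)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
E_EARTH = 1e32     # rough energy needed to blow up the Earth, J
M_EARTH = 5.97e24  # Earth's mass, kg
R = 160e3          # assumed Death Star radius, m

m_shot = E_EARTH / c**2              # mass-equivalent of one Earth-destroying shot, ~1.1e15 kg
shots_earth_mass = M_EARTH / m_shot  # shots whose stored energy "weighs" one Earth, ~5.4e9

m_needed = 9.8 * R**2 / G            # point mass giving 9.8 m/s^2 at radius R
shots_needed = m_needed / m_shot     # Earth-destroying shots of stored energy, ~3.4e6
E_needed = m_needed * c**2           # that stored energy in joules, ~3.4e38 J
```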


  • Dark Souls Remastered. Getting cursed just before reaching a boss, with no money to buy a cure, forced me to either give up, grind, or “get gud.”

    I beat the boss without getting hit once. I know other people probably do that for every boss but for me that’s a big achievement since I suck at combat and video games in general.

    In other news, the game is hard but beautiful and the level design is pretty impressive. I’m looking forward to marathoning the other souls games after this.