

What about a non-commercial alternative to Strava? Its core feature is literally push a button, GPS tracking, push a button to stop, then some tagging


“Scientists could soon use…”?
What kind of articles are you sending us? That is clear bullshit


Thank you!


That Cordyceps brain infection looked pretty bad in that Last of Us documentary
I’m just some arrogant guy on the internet and not some arbiter of correctness, but I’d suggest something along the lines of adding a link to the original material along with the amazing image and quote. I think it would be this:
https://apod.nasa.gov/apod/ap260207.html
But, it seems like I sent the wrong link before so that probably confused things. Sorry about that


One of the issues with me trying to understand more here was that this thread blew up, so it was tough to keep threads straight. I made mistakes replying to comments assuming they were isolated, when they had been answered elsewhere or the same person had made a lot of comments.
I see you made a few comments and the other two seem to be pretty low information and not in good faith… so I guess I’ll find that button for you too


Fucking hilarious comment in the conversation tree to add this to. (It assumes context about UN Watch and the specific views this community holds of it.) It highlights why I left the community and blocked a couple of people in this conversation. Feel free to do the same to me
I’m glad to see people finding interesting and beautiful things to post, I just don’t understand copying content and not sharing the link. Copying to me results in a quick “that’s pretty”. Sharing the link helps the creator and anyone new that wants to learn more about the content (and in this case could be introduced to the picture of the day). It’s such a small amount of work from the poster so that every single person that sees it can get a little more out of it.
Mostly though, I just don’t understand why you attributed those involved so well (far above what I expected from anyone, even taking the time to format links and look them up) but didn’t do a quick copy paste of where you saw it. I’m not even trying to have a dig at your behaviour here, it’s just something I don’t relate to.
I hope you keep posting, I just think the link adds a lot and helps avoid things like CG.


Cold Open: Jake Pranks Holt - Brooklyn Nine-Nine
Why go to so much effort with the credits but not deep link to the post on NASA’s Picture of the Day?
https://apod.nasa.gov/apod/ap260207.html
Also, this blog post links out to more information in the text that was copy pasted in the OP
Edit: Corrected URL


I swapped from Voyager to this and far prefer Mlem


I have sources, but I won’t show them, as I’ve noticed that people on Lemmy are very negative and unlikely to understand anything even if they see it with their own eyes
🤔


I found this Reddit list, but I thought I’d see what Lemmy comes up with:
https://www.reddit.com/r/stocks/comments/1k0hvd4/what_assets_are_the_best_alternatives_to_us/
This seems to be the source:
https://science.nasa.gov/photojournal/global-image-of-io-true-color/


For the measurement:
We utilize data from Time 10 (2018/2019) of the New Zealand Attitudes and Values Study (NZAVS), the largest cross-sectional sample to date that contains all our focal variables. The NZAVS is an ongoing national probability panel study of New Zealand adults that began in 2009 and examines myriad variables, including personality, social attitudes, and health and well-being each year. The NZAVS was approved by the University of Auckland Human Ethics Committee and is renewed every 3 years. Although the data presented in this study are not publicly available due to restrictions imposed by our Ethics Committee, a deidentified data set containing the variables analyzed here is available upon request from the authors for the purpose of replication. The NZAVS uses extensive recruitment strategies to ensure broad national representation and achieves large sample sizes (i.e., N = 47,948 in the wave utilized in the present study), which helps mitigate concerns about sampling error and enhances the stability and precision of parameter estimates. Sibley (2021) provided full details of the sampling procedure, retention rates, and ethics approvals for the NZAVS (see also the NZAVS Open Science Framework page at https://osf.io/75snb/?view_only=dc7e2214ec194a63a0401a442e69354d).
Gender identity centrality was assessed with a single item adapted from Leach et al. (2008): “Being a woman/man is an important part of how I see myself.”
Sexual prejudice was assessed using one item adapted from the Pew Forum on Religion and Public Life U.S. Religious Landscape Survey (Pew Research Center, 2007): “I think that homosexuality should be accepted by society” (reverse-scored).
Disagreeableness was measured by reverse-scoring participants’ responses to the Agreeableness facet of the Mini-International Personality Item Pool–6 (Sibley et al., 2011). Participants were asked to respond to four items: I … (a) “sympathize with others’ feelings” (reverse-coded), (b) “feel others’ emotions” (reverse-coded), (c) “am not interested in other people’s problems,” and (d) “am not really interested in others” (α = .71).
Narcissism was assessed using three of the highest loading items from Campbell et al.’s (2004) Psychological Entitlement Scale. Participants rated how strongly they (a) “feel entitled to more of everything,” (b) “deserve more things in life,” and (c) “demand the best because I’m worth it” (α = .70).
Hostile and benevolent sexism were captured using 10 items from the 22-item Ambivalent Sexism Inventory (Glick & Fiske, 1996). Hostile sexism was measured using the mean of items 5, 11, 14, 15, and 16 (e.g., “Women seek to gain power by getting control over men”; α = .84). Benevolent sexism was measured using the mean of items 8, 9, 12, 19, and 22 (e.g., “Women should be cherished and protected by men”; α = .70).
Opposition to domestic violence prevention was measured with a single item (Sibley et al., 2020). Participants were asked to rate how strongly they support “Greater investment in reducing domestic violence” on a 1 (strongly oppose) to 7 (strongly support) scale (reverse-scored).
Social dominance orientation was assessed using the mean of six items from Sidanius and Pratto’s (2001) 16-item SDO6 scale: (a) “It is OK if some groups have more of a chance in life than others,” (b) “Inferior groups should stay in their place,” (c) “To get ahead in life, it is sometimes okay to step on other groups,” (d) “We should have increased social equality” (reverse-scored), (e) “It would be good if groups could be equal” (reverse-scored), and (f) “We should do what we can to equalize conditions for different groups” (reverse-scored; α = .74).
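All of these scales are scored the same way: reverse-code the negatively keyed items, average across items, and report internal consistency as Cronbach’s α. A minimal sketch of that scoring logic in Python, using made-up responses (not the NZAVS data):

```python
import numpy as np

def reverse_score(responses, scale_max=7, scale_min=1):
    """Reverse-score Likert responses so high values indicate the construct."""
    return scale_max + scale_min - responses

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy example: 5 respondents, 4 agreeableness items on a 1-7 scale.
raw = np.array([
    [6, 7, 2, 1],
    [5, 6, 3, 2],
    [7, 7, 1, 1],
    [4, 5, 4, 3],
    [6, 6, 2, 2],
])
# Items (a) and (b) are reverse-coded so the mean indexes disagreeableness.
scored = raw.astype(float)
scored[:, :2] = reverse_score(scored[:, :2])
disagreeableness = scored.mean(axis=1)  # one score per respondent
alpha = cronbach_alpha(scored)
```

This is just the generic mean-of-items approach the methods section describes; the item numbers and data here are illustrative only.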
The analysis to find the profiles:
We followed Johnson’s (2021) recommendations and used Mplus Version 8.10 to estimate LPAs with between 1 and 10 profiles under four distinct variance–covariance structures (see Supplemental Table S3).
To assess model fit, we examined the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the sample-size adjusted BIC (aBIC), with lower values indicating relatively better model fit. We also examined the Lo–Mendell–Rubin adjusted likelihood ratio test (see Lo et al., 2001; Nylund et al., 2007; Vermunt, 2024) and the Parametric Bootstrapped Likelihood Ratio Test (Curran & Bauer, 2021) to determine if a model with k-profiles significantly improves model fit relative to the k − 1 profile solution. Finally, we evaluated the entropy of the different model solutions, with values closer to 1.0 indicating clearer separation into distinct profiles (see Collins & Lanza, 2010).
The BIC under the Type 1 variance–covariance structure increased after the eighth profile, suggesting model fit declined beyond this point. Moreover, models with nine and 10 profiles did not converge. The Lo–Mendell–Rubin adjusted likelihood ratio test for the eighth profile was nonsignificant (p = .99), indicating that adding an eighth profile did not improve model fit relative to the seven-profile solution. Of the seven models that converged and produced improvements to model fit, the five-profile solution had the highest entropy (0.82), indicating a clear separation of these data into distinct profiles (see Collins & Lanza, 2010). Both the Lo–Mendell–Rubin adjusted likelihood ratio test and Bootstrapped Likelihood Ratio Test also supported the five-profile solution over the four-profile solution.
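The information criteria they compare follow standard formulas: AIC and BIC penalize the log-likelihood by the number of free parameters, and the sample-size adjusted BIC (Sclove’s adjustment) replaces n with (n + 2)/24 in the penalty. A minimal sketch of that model-selection step, using hypothetical log-likelihoods rather than the study’s actual values:

```python
import math

def aic(log_lik, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n):
    """Bayesian information criterion: penalizes each parameter by log(n)."""
    return n_params * math.log(n) - 2 * log_lik

def abic(log_lik, n_params, n):
    """Sample-size adjusted BIC: substitutes (n + 2) / 24 for n in the penalty."""
    return n_params * math.log((n + 2) / 24) - 2 * log_lik

n = 15808  # heterosexual men in the analytic sample
# Hypothetical (log-likelihood, free-parameter count) pairs for 1-5 profiles.
candidates = {
    1: (-210500.0, 16),
    2: (-205200.0, 25),
    3: (-203100.0, 34),
    4: (-202400.0, 43),
    5: (-202050.0, 52),
}
fits = {k: bic(ll, p, n) for k, (ll, p) in candidates.items()}
best = min(fits, key=fits.get)  # profile count with the lowest BIC
```

The likelihood ratio tests (Lo–Mendell–Rubin, bootstrapped LRT) and entropy that the paper also reports come out of Mplus itself and aren’t reproduced here; this only shows the "pick the k that minimizes the criterion" comparison.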
Here are the five profiles they discuss:



Here’s the actual article and abstract:
https://psycnet.apa.org/fulltext/2027-02373-001.html
Despite being frequently discussed in both mainstream discourse and academic scholarship, little empirical work defines “toxic masculinity.” We address this oversight by estimating the prevalence of men’s distinct response patterns to eight indicators of problematic masculinity: gender identity centrality, sexual prejudice, disagreeableness, narcissism, hostile sexism, benevolent sexism, opposition to domestic violence prevention initiatives, and social dominance orientation. Latent profile analysis of a nationwide random sample of heterosexual men from New Zealand (N = 15,808) identified five profiles. The largest profile (35.4%), “Atoxics,” scored low across all focal measures, whereas two other profiles (totaling 53.8%) expressed low-to-moderate support across indicators. The remaining two profiles reflected distinct forms of problematic masculinity marked by contrasting forms of sexism: “Benevolent Toxics” (7.6%) and “Hostile Toxics” (3.2%). Notably, gender identity centrality was only a weakly informative indicator of problematic masculinity. We thus demonstrate the need to separate problematic masculinity from other constructive forms of masculinity.


It was not uncommon for me to hit warnings that I’d used 50G. I changed the warnings to 70G at some point and still got the warning a few times


Seems to be an opinion piece that doesn’t say much and uses many words to get there.
In the last half, they describe the science using some concrete examples involving microplastics:
But inevitably, the analytical researchers, mainly chemists, wrote horrified letters to journal editors. They contend, for example, that the methods being used can read ordinary bodily fats in a sample as plastics, potentially giving false readings; that there weren’t proper corrections for the amount of background plastic in the laboratory; and that more controls were needed.
The clinical teams have replied that there is a steep learning curve, and that this sort of work hasn’t been done in biological material before. Maybe some more controls would help, but more background plastics wouldn’t account for some things, such as that five-fold difference in heart attacks. And it isn’t at all clear whether any of these methodological shortcomings mean that there aren’t microplastics in humans, or that they aren’t having ill effects. They just raise uncertainties.
Eventually, the analytical experts will start working more closely with the clinical crowd, and they will all learn to measure microplastics robustly in human tissue and investigate possible impacts on health. That is, if the agencies that fund scientific research keep funding them.
And I think the rest of the argument is just that the longer it takes to get the uncertainties out of the science, the more opportunities there are for science deniers to manipulate the messaging and this process has happened multiple times (“from DDT to cigarette smoke, to ozone destroyers to greenhouse gases”).
All that seems accurate, but it fits in two paragraphs.
Yes… liquor, guns, driving, and physical punishment should solely be the parents’ choice. Wait… those caused issues and the government decided to mitigate some of the negative consequences?