• 1.27K Posts
  • 583 Comments
Joined 9M ago
Cake day: Apr 18, 2022


Guterres is right.

But the biggest insanity is that humanity is letting this happen. Year after year, warnings go by without us reacting. These companies, backed by powerless, externally controlled politicians, are harming us, our children, and future generations.

We sit silently waiting. Waiting for someone else to prevent the catastrophe for us. We sit still while they rape our democracies and our livelihoods. Who wants to be the first to be so stupid as to put restrictions on the status quo, when we in the rich West are profiting so handsomely from our parasitic way of life?

Anyone who even occasionally consumes the news is aware that society has a serious problem. Passively waiting and ignoring the danger is an active choice. Doing nothing is the supposedly easy choice, the easy way out in the short term. None of us has the power to change anything? So we just lie down and die? Give up without a fight?

But let me get this straight. We really want to knowingly just watch our children’s livelihoods being taken away? We don’t want to protect and care for each other and fight for a world worth living in? My friends, I don’t get it.

We all have the power to change. Together we have all the power.

Now is the time to stick together and unite behind our common values. Now is the time we want to leave selfishness and greed behind. Now is the time for real change. Now is the time to get serious. Now is the time for survival.

All the letters have been written, all the petitions submitted, the conferences have ended, all the fears and concerns expressed. But the messages slide off like water on a Teflon pan, leaving no mark. Now is the time to get serious.

Take care of your loved ones. Take care of those who are fighting for survival and those who can’t. Get serious!

— By @climateemergency@mastodon.social on Mastodon



[Americans are retiring to Vietnam, for cheap healthcare and a decent standard of living](https://www.latimes.com/world-nation/story/2019-12-25/americans-are-retiring-to-vietnam-for-cheap-health-care-and-a-decent-living-standard)










ChatGPT alternatives
...






Parsel: A (De-)compositional Framework for Algorithmic Reasoning with Language Models - Stanford University, Eric Zelikman et al. - beats prior code generation SOTA by over 75%!
Paper: https://arxiv.org/abs/2212.10561

Github: https://github.com/ezelikman/parsel

Twitter: https://twitter.com/ericzelikman/status/1618426056163356675?s=20

Website: https://zelikman.me/parselpaper/

Code Generation on APPS Leaderboard: https://paperswithcode.com/sota/code-generation-on-apps

Abstract:

> Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical multi-step reasoning tasks like generating complex programs. For these tasks, humans often start with a high-level algorithmic design and implement each part gradually. We introduce Parsel, a framework enabling automatic implementation and validation of complex algorithms with code LLMs, taking hierarchical function descriptions in natural language as input. We show that Parsel can be used across domains requiring hierarchical reasoning, including program synthesis, robotic planning, and theorem proving. We show that LLMs generating Parsel solve more competition-level problems in the APPS dataset, resulting in pass rates that are over 75% higher than prior results from directly sampling AlphaCode and Codex, while often using a smaller sample budget. We also find that LLM-generated robotic plans using Parsel as an intermediate language are more than twice as likely to be considered accurate than directly generated plans. Lastly, we explore how Parsel addresses LLM limitations and discuss how Parsel may be useful for human programmers.


[🧲 Magnet link: magnet:?xt=urn:btih:7e0ac90b489baee8a823381792ec67d465488fef&dn=yandexarc](magnet:?xt=urn:btih:7e0ac90b489baee8a823381792ec67d465488fef&dn=yandexarc)

```
[00:00.000 --> 00:04.560] We're 1.2154% towards building the best search engine in the world.
[00:04.560 --> 00:05.920] And I'll show you how I came up with that number.
[00:05.920 --> 00:08.160] Mediawiki has about 114,000 commits.
[00:08.160 --> 00:10.560] Currently we have 225 commits.
[00:10.560 --> 00:12.880] Google has an index of about 100 billion pages.
[00:12.880 --> 00:15.120] We want to crawl those pages once a month.
[00:15.120 --> 00:17.600] We want to crawl 3 billion pages a day.
[00:17.600 --> 00:19.600] And we're currently crawling 1 million.
[00:19.600 --> 00:22.320] For our offline evaluation of search ranking,
[00:22.320 --> 00:25.040] we're using NDCG to score our rankings.
[00:25.040 --> 00:28.400] We currently have about 10%, we want to get to about 80%.
[00:28.400 --> 00:30.320] We want to have 250 blog posts.
[00:30.320 --> 00:31.360] We've only got two.
[00:31.360 --> 00:32.960] We want to have 1,000 videos.
[00:32.960 --> 00:34.560] We've only got 6.
[00:34.560 --> 00:37.920] We want to have about 100,000 active volunteers each month.
[00:37.920 --> 00:39.760] That's roughly what Wikipedia has.
[00:39.760 --> 00:41.680] We've only got about 26.
[00:41.680 --> 00:44.400] We want to build our organization to about 20 employees.
[00:44.400 --> 00:45.040] We have none.
[00:45.040 --> 00:46.400] We don't even have an organization.
[00:46.400 --> 00:48.960] We want to incorporate as a non-profit.
[00:48.960 --> 00:52.000] Currently we have a total of 11,121 points.
[00:52.000 --> 00:55.920] Out of a possible maximum of 915,000.
[00:55.920 --> 00:59.920] Which means...
```

You can help by using [the browser extension](https://addons.mozilla.org/en-GB/firefox/addon/mwmbl-web-crawler/) that crawls one page each second.

Number of pages crawled per day:

```
 day                 | count
---------------------+---------
 2023-01-28 00:00:00 |  701195
 2023-01-27 00:00:00 |  753338
 2023-01-26 00:00:00 |  771691
 2023-01-25 00:00:00 |  823852
 2023-01-24 00:00:00 |  952735
 2023-01-23 00:00:00 | 1005805
 2023-01-22 00:00:00 | 1089965
 2023-01-21 00:00:00 | 1121781
 2023-01-20 00:00:00 | 1092852
 2023-01-19 00:00:00 | 1223518
 2023-01-18 00:00:00 |  906054
 2023-01-17 00:00:00 |  745636
 2023-01-16 00:00:00 |  692705
 2023-01-15 00:00:00 |  677468
 2023-01-14 00:00:00 | 1069739
 2023-01-13 00:00:00 | 1011536
 2023-01-12 00:00:00 |  996143
 2023-01-11 00:00:00 |  980235
 2023-01-10 00:00:00 |  896543
 2023-01-09 00:00:00 |  498350
```

We just need to multiply this by about 3,000. Totally achievable, given how early on we are in this project.
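As a back-of-the-envelope check, here's a minimal Python sketch of the two headline numbers. The point totals and crawl figures are taken straight from the transcript; how individual metrics (commits, pages, NDCG, ...) map to points isn't spelled out in the video, so the two totals are taken as given:

```python
# Back-of-the-envelope check of the numbers in the transcript.
# The mapping from individual metrics to points isn't explained in
# the video; the two point totals are taken as given.
current_points = 11_121
max_points = 915_000
print(f"{current_points / max_points * 100:.4f}% done")  # -> 1.2154% done

# The crawl gap: ~1 million pages/day now vs. a 3 billion/day target.
current_pages_per_day = 1_000_000
target_pages_per_day = 3_000_000_000
print(f"crawling needs to grow ~{target_pages_per_day // current_pages_per_day:,}x")  # -> ~3,000x
```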













What are the security implications of installing a pirated apk using Shelter?
I want to make sure that these APKs don't send private data to a server. I don't think this is what Shelter is for.

Actually, I'd find it even better if there were an instance that imported everything from Reddit. That way people could actually interact with the content, unlike in libreddit.


Ideas for a Reddit importer script?
I'm looking for a script to import content from Reddit.

> Someone on lemmygrad wrote a script for this.
>
> — By [@nutomic@lemmy.ml](https://lemmy.ml/u/nutomic) on [Github](https://github.com/LemmyNet/lemmy/issues/119#issuecomment-1383227518)

I want to import top posts and comments so that I only have to use Lemmy instead of Lemmy and [libreddit](https://github.com/spikecodes/libreddit). I want to adapt the script to choose a number of top Reddit posts for each community based on its users per month. Got the idea from this issue: [The rank of a post in the aggregated feed should be inversely proportional to the size of the community](https://github.com/LemmyNet/lemmy/issues/1026)

Each comment could be something like:

> comment body
>
> posted by u/reddit-user on [comment link](https://reddit.com)

Is there a better way to choose the posts than a minimum score? Something that chooses around the same number of posts from each subreddit? I think I could take the median score of the posts on a subreddit and select the posts above it (sketched below). Any other ideas for this?

Edit: [found it](https://github.com/rileynull/RedditLemmyImporter). It's made in Kotlin so I won't do this for now.
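A minimal Python sketch of the median-based selection idea. The `posts_by_subreddit` structure and its field names are hypothetical placeholders, not from any actual Reddit or Lemmy API:

```python
from statistics import median

# Hypothetical input: posts grouped by subreddit, each with a score.
# Real data would come from the Reddit API or a data dump.
posts_by_subreddit = {
    "privacy": [{"title": "a", "score": 120}, {"title": "b", "score": 40}, {"title": "c", "score": 75}],
    "linux":   [{"title": "d", "score": 900}, {"title": "e", "score": 15}],
}

def select_posts(posts_by_subreddit):
    """Keep posts scoring above their own subreddit's median.

    Because the threshold is relative to each subreddit, roughly half
    of every subreddit's posts get selected regardless of its size,
    unlike a single global minimum score.
    """
    selected = {}
    for sub, posts in posts_by_subreddit.items():
        threshold = median(p["score"] for p in posts)
        selected[sub] = [p for p in posts if p["score"] > threshold]
    return selected

print(select_posts(posts_by_subreddit))
```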


[TikTok Ban in US before 2024](https://www.metaculus.com/questions/14062/tiktok-ban-in-us-before-2024/)






They haven’t contacted because there’s no sign of intelligence here.


They exist, but they are hiding in another dimension.


They exist, but they are observing us from afar.


They exist, but they don’t want us to know.


They exist, but don’t want to contact us.


They exist, but they are using non-detectable communication.



Intelligent civilizations don’t exist.



We’re the first ones and we’ll be extinct before there are any others.




Demo: https://huggingface.co/spaces/timbrooks/instruct-pix2pix

Github: https://github.com/timothybrooks/instruct-pix2pix

Project page: https://www.timothybrooks.com/instruct-pix2pix/



There are a few alternatives to using bookmarks for organizing URLs. One option is to use an online bookmarking service such as Delicious or Pinboard, which allow you to store and organize your bookmarks in the cloud. These services also allow you to share your bookmarks with other users and access them from any device. Another option is to use a bookmark manager, such as Raindrop or Bookmark Ninja, which provide features such as tagging, categorizing, and searching for bookmarks. You can also use a web browser extension, such as OneTab or LinkKeeper, which can help you organize and manage your bookmarks. Finally, if you prefer to store your bookmarks locally, you can use a program such as Evernote or Zotero, which provide more powerful features for organizing and managing bookmarks.

— YouChat




Database of useful AI powered tools
- https://theaigeek.com/
- https://www.futurepedia.io/

Google’s MusicLM: Text Generated Music & It’s Absurdly Good
[MusicLM: Generating Music From Text (from Google) AI](https://google-research.github.io/seanet/musiclm/examples)

It's sadly over for voice actors. Just as with artists, not enough people will care for there to be social change.


Studies have concluded that stevia extract does not contain fermentable carbohydrates and does not produce lactic acid, which are both factors in causing cavities and tooth decay[1][2]. Therefore, it is generally accepted that stevia does not cause cavities[3][4][5].

 [1]: https://www.dentistrywithaheart.com/blog/the-relationship-between-stevia-extract-and-cavities
 [2]: https://pubmed.ncbi.nlm.nih.gov/26192983
 [3]: https://www.deltadentalia.com/a-healthy-life/dental-health/sugar-swap-showdown-xylitol-vs-stevia/#:~:text=Derived%20from%20the%20Stevia%20plant,t%20contribute%20to%20tooth%20decay.
 [4]: https://sweetlysteviausa.com/blog/post/is-stevia-tooth-friendly
 [5]: https://www.smilesofbellevue.com/2018/06/13/your-teeth-should-steer-clear-from-these-bad-foods

Perplexity


That could be automated. A possible future feature. I don’t care about it enough to open an issue though.


From the outcome I think most likely to the least likely:

  • Most experts I’ve read say that if the AGI can improve itself and it’s not aligned with human values, that would mean the end for humanity. When it becomes intelligent enough, it would do its own thing, treating us the way we treat ants. I think this is the most likely outcome, since there seems to be a race for AGI without much thought for safety.
  • I think if a corporation was the first to develop an AGI and it was aligned with its values, then they would try to increase profits at the expense of the world and everyone else, just like they usually do.
  • If AGI was aligned with human values then it would probably be as flawed as humans and we would all die.
  • If AGI was aligned to each person then some psychopath could use it to destroy the world, the galaxy or the universe.

I’m finding it hard to think of an outcome that would create a utopia.

  • If humans managed to modify their emotions in some way, then maybe aligning AGI with our values could be safe?

Someone could have a custom instance with an affinity page that showed other users sorted by percentage affinity with the current user.
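What "affinity" would mean isn't defined here; a minimal Python sketch, assuming it's the Jaccard overlap between two users' sets of upvoted posts (the `affinity` function and the data shapes are my own invention, not any Lemmy API):

```python
# Hypothetical affinity metric: Jaccard similarity between the sets of
# post IDs two users have upvoted, expressed as a percentage.
def affinity(upvotes_a: set, upvotes_b: set) -> float:
    if not upvotes_a and not upvotes_b:
        return 0.0
    return len(upvotes_a & upvotes_b) / len(upvotes_a | upvotes_b) * 100

me = {"post1", "post2", "post3"}
others = {"alice": {"post1", "post2"}, "bob": {"post4"}}

# The affinity page: other users sorted by percentage affinity, highest first.
ranked = sorted(others.items(), key=lambda kv: affinity(me, kv[1]), reverse=True)
for user, votes in ranked:
    print(f"{user}: {affinity(me, votes):.0f}%")  # alice: 67%, bob: 0%
```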


I don’t know why, but the first thing that came to mind after learning this is how it could be used to troll: every time you get a downvote, reply with a comment like

@username@instance

“I’m watching you” gesture meme/gif/custom emoji

@OptimusPrime@lemmy.ml


I downvoted for two reasons:

  • I don’t like to read through Twitter/Mastodon threads that should be written as a blog post or Reddit/Lemmy post and linked on Twitter/Mastodon.
  • US courts have already ruled that scraping internet content is legal, and I doubt any amount of discussion is going to change that. I also doubt US companies will comply if consent is required in other countries.

DARPA’s new Skynet prototype will make sure humanity never burns dinner ever again




The title is not a summary of the video. Just an idea that came to mind watching it.


Dubbing any video is something I wasn’t expecting to see so soon. AI keeps surprising me.


I know English, Spanish, and French. Knowing what I know now, I would have chosen to learn Mandarin instead of Spanish and French.


I had forgotten about popcorntime. I think that’s the best alternative right now.


They will probably pick easy cases in the beginning to promote it, saying “look! It works fine, you don’t need to pay a lawyer, pay us instead”.