Singularity
!singularity

Parsel: A (De-)compositional Framework for Algorithmic Reasoning with Language Models - Stanford University, Eric Zelikman et al. - Beats prior code generation SOTA by over 75%!
Paper: https://arxiv.org/abs/2212.10561
GitHub: https://github.com/ezelikman/parsel
Twitter: https://twitter.com/ericzelikman/status/1618426056163356675?s=20
Website: https://zelikman.me/parselpaper/
Code Generation on APPS Leaderboard: https://paperswithcode.com/sota/code-generation-on-apps

Abstract:

> Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical multi-step reasoning tasks like generating complex programs. For these tasks, humans often start with a high-level algorithmic design and implement each part gradually. We introduce Parsel, a framework enabling automatic implementation and validation of complex algorithms with code LLMs, taking hierarchical function descriptions in natural language as input. We show that Parsel can be used across domains requiring hierarchical reasoning, including program synthesis, robotic planning, and theorem proving. We show that LLMs generating Parsel solve more competition-level problems in the APPS dataset, resulting in pass rates that are over 75% higher than prior results from directly sampling AlphaCode and Codex, while often using a smaller sample budget. We also find that LLM-generated robotic plans using Parsel as an intermediate language are more than twice as likely to be considered accurate than directly generated plans. Lastly, we explore how Parsel addresses LLM limitations and discuss how Parsel may be useful for human programmers.
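To make the decompose-implement-verify idea concrete, here is a minimal Python sketch of that loop. This is not the authors' code or API: the `llm_complete` helper, the spec format, and the constraint checks are all hypothetical stand-ins for how a code LLM could fill in hierarchically described functions and keep only candidates that pass their constraints.

```python
# Toy sketch of the decompose-implement-verify idea behind Parsel (not the authors' code).
# `llm_complete` is a hypothetical stand-in for any code LLM completion call.

def llm_complete(prompt: str, n: int = 4) -> list[str]:
    """Hypothetical: return n candidate implementations for `prompt`."""
    raise NotImplementedError("plug in a code LLM here")

def implement(description: str, signature: str, tests: list, verified_so_far: str) -> str:
    """Sample candidate bodies for one function description and keep the first
    that passes its constraints, with already-verified functions in scope."""
    name = signature.removeprefix("def ").split("(")[0]
    prompt = f"{verified_so_far}\n# {description}\n{signature}\n"
    for candidate in llm_complete(prompt):
        namespace: dict = {}
        try:
            exec(verified_so_far + "\n" + candidate, namespace)  # compile candidate plus verified children
            fn = namespace[name]
            if all(fn(*args) == expected for args, expected in tests):
                return candidate  # verified implementation
        except Exception:
            continue  # discard candidates that crash or fail their constraints
    raise RuntimeError(f"no candidate satisfied the constraints for: {description}")

# Hierarchical spec: (natural-language description, signature, input/output constraints).
spec = [
    ("return the n-th Fibonacci number", "def fib(n):", [((0,), 0), ((7,), 13)]),
    ("sum of the first n Fibonacci numbers, using fib", "def fib_sum(n):", [((3,), 2)]),
]

program = ""
for description, signature, tests in spec:
    program += implement(description, signature, tests, program) + "\n\n"
print(program)
```

The real framework does a more sophisticated search over combinations of candidate implementations, but the core loop above (describe, sample, verify, compose) is the shape of it.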


Database of useful AI-powered tools
- https://theaigeek.com/
- https://www.futurepedia.io/

> I used ChatGPT to write the story and ElevenLabs Prime Voice AI to read the entire thing. Both of these services are free to use.
>
> — By [u/Ortamis](https://reddit.com/user/Ortamis) on [Reddit](https://reddit.com/r/singularity/comments/10nkwq0/ai_reads_you_a_story_it_is_literally/)
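For anyone wanting to script the text-to-speech half of that workflow rather than use the web UI, a rough sketch follows. The endpoint path, `xi-api-key` header, and JSON fields are assumptions about the public ElevenLabs REST API, and the key, voice ID, and story text are placeholders; check the current docs before relying on any of it.

```python
# Hedged sketch: send ChatGPT-written story text to ElevenLabs for narration.
# Endpoint, headers, and body fields are assumptions about the ElevenLabs REST API.
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"    # placeholder
VOICE_ID = "YOUR_VOICE_ID"         # placeholder voice identifier
STORY = "Once upon a time..."      # paste the ChatGPT-generated story here

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": STORY},
)
resp.raise_for_status()

with open("story.mp3", "wb") as f:  # response body is the rendered audio
    f.write(resp.content)
```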

Google’s MusicLM: Text-Generated Music & It’s Absurdly Good
[MusicLM: Generating Music From Text (from Google) AI](https://google-research.github.io/seanet/musiclm/examples)

Paper: https://arxiv.org/abs/2301.07608
[Breakthrough Google AI From DeepMind "AdA" Can Learn Millions of Tasks At Human Level, Each In Minutes Without Needing Training Data | New InstructPix2Pix Text-To-Image-Editing Artificial Intelligence | New OMMO Aerial View Synthesis](https://yt.artemislena.eu/watch?v=dz-wjK07fH4)

ChatGPT rival drops in: AnthropicAI releases “Claude”
https://mobile.twitter.com/search?q=anthropic%20claude&src=typed_query

Around mid-December, Anthropic released a very interesting paper, "Constitutional AI: Harmlessness from AI Feedback". Now we have our first demonstration of Constitutional AI in action. It automates the final phase of RLHF (reinforcement learning from human feedback) by generating its own training examples from a set of rules, an "AI Constitution" so to speak. The trend of AI models creating datasets is only going to accelerate. It's a direct way for AI to improve itself.
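A toy sketch of the critique-and-revise loop the paper describes: the model judges its own draft against written principles, rewrites it, and the resulting (prompt, revised response) pairs become training data. The `llm` function and the two example principles are hypothetical placeholders, not Anthropic's code or actual constitution.

```python
# Toy sketch of the Constitutional AI data-generation loop (not Anthropic's code).
# `llm` is a hypothetical text-completion function you would supply.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a language model here")

# Example principles only; the real constitution is a longer, curated list.
CONSTITUTION = [
    "Choose the response that is least likely to help with harmful activities.",
    "Choose the response that is most honest and least misleading.",
]

def self_improve(user_prompt: str) -> tuple[str, str]:
    """Generate, critique against each principle, revise, and return a
    (prompt, final_response) pair to add to a fine-tuning dataset."""
    response = llm(f"Human: {user_prompt}\nAssistant:")
    for principle in CONSTITUTION:
        critique = llm(
            f"Response: {response}\n"
            f"Critique this response according to the principle: {principle}"
        )
        response = llm(
            f"Response: {response}\nCritique: {critique}\n"
            "Rewrite the response to address the critique:"
        )
    return user_prompt, response
```

Pairs produced this way stand in for the human preference labels in the last stage of RLHF, which is what makes the pipeline self-generating.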


cross-posted from: https://lemmy.ml/post/694359
[Apple AI audiobook narration - TechLinked](https://youtube.com/watch?v=v7UDOlE7WJ8&t=7m2s)


Meta AI announces OPT-IML: a new language model with 175B parameters, fine-tuned on 2,000 language tasks — openly available soon under a noncommercial license for research use cases.
> For OPT-IML, we boosted performance of our OPT-175B work using instruction tuning, allowing us to adapt it for more diverse language applications (Ex. Answering Q’s, summarizing text & translation).
>
> A 30B parameter model is available today with a 175B model available soon.
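Since the smaller checkpoints are released for research use, a minimal sketch of prompting one with Hugging Face `transformers` is below. The hub ID is an assumption about where Meta publishes the weights, so verify the exact name and license terms against the official release before use.

```python
# Hedged sketch: instruction-prompting an OPT-IML checkpoint via transformers.
# The hub ID below is an assumption; the 30B/175B variants follow the same
# pattern but need far more memory and typically multi-GPU loading.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-iml-max-1.3b"   # assumed hub name; verify before use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize: Meta AI released OPT-IML, an instruction-tuned version of OPT."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```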

AR glasses that show deaf people text for what they can’t hear are finally ready, 13 years after Ray Kurzweil predicted it would happen.
Now the question is availability and price. Basically all deaf people in the world need these, not just a small sample group.

The author argues that automation has been transforming society for a long time; what's different now is the rate of progress, which is already much faster than the span of a human career and will only get faster. The author suggests we need to ask what we want society to change into: whether we keep working like gas station attendants, or reshape society so that the displacement of work by AI leads to greater wealth, health, and happiness.
