Parsel: A (De-)compositional Framework for Algorithmic Reasoning with Language Models - Stanford University, Eric Zelikman et al. - Beats the prior code-generation SOTA by over 75%!
Abstract: > Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical multi-step reasoning tasks like generating complex programs. For these tasks, humans often start with a high-level algorithmic design and implement each part gradually. We introduce Parsel, a framework enabling automatic implementation and validation of complex algorithms with code LLMs, taking hierarchical function descriptions in natural language as input. We show that Parsel can be used across domains requiring hierarchical reasoning, including program synthesis, robotic planning, and theorem proving. We show that LLMs generating Parsel solve more competition-level problems in the APPS dataset, resulting in pass rates that are over 75% higher than prior results from directly sampling AlphaCode and Codex, while often using a smaller sample budget. We also find that LLM-generated robotic plans using Parsel as an intermediate language are more than twice as likely to be considered accurate than directly generated plans. Lastly, we explore how Parsel addresses LLM limitations and discuss how Parsel may be useful for human programmers.
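Parsel's core idea, decomposing a task into natural-language function descriptions with input/output tests, then implementing and validating each part, can be illustrated with a minimal sketch. Everything here (the spec format, function names, and hand-written candidate implementations standing in for LLM samples) is illustrative and is not Parsel's actual syntax or API:

```python
# Minimal sketch of Parsel-style (de)composition, NOT the real Parsel framework:
# each function gets a natural-language description plus input/output tests;
# candidate implementations (hand-written here, LLM-sampled in Parsel)
# are accepted only if they pass their tests.

def validate(impl, tests):
    """Accept an implementation only if it satisfies every (args, expected) test."""
    return all(impl(*args) == expected for args, expected in tests)

# Hierarchical spec: a top-level task decomposed into a child function.
spec = {
    "collatz_step": {
        "desc": "Return n/2 if n is even, else 3n+1.",
        "tests": [((4,), 2), ((3,), 10)],
    },
    "collatz_length": {
        "desc": "Number of steps for n to reach 1 under collatz_step.",
        "tests": [((1,), 0), ((4,), 2)],
    },
}

# Candidate implementations, standing in for sampled code.
def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_length(n):
    steps = 0
    while n != 1:
        n = collatz_step(n)
        steps += 1
    return steps

candidates = {"collatz_step": collatz_step, "collatz_length": collatz_length}
for name, node in spec.items():
    assert validate(candidates[name], node["tests"]), name
```

The key property this sketch captures is that each node in the decomposition is checked against its own tests, so invalid samples can be rejected function-by-function rather than only at the level of the whole program.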

Database of useful AI-powered tools

> I used ChatGPT to write the story and ElevenLabs Prime Voice AI to read the entire thing. Both of these services are free to use.
>
> — By u/Ortamis on Reddit

Google’s MusicLM: Text-Generated Music & It’s Absurdly Good
MusicLM: Generating Music From Text (from Google AI)

Paper: Breakthrough Google AI From DeepMind "AdA" Can Learn Millions of Tasks At Human Level, Each In Minutes Without Needing Training Data | New InstructPix2Pix Text-To-Image-Editing Artificial Intelligence | New OMMO Aerial View Synthesis

ChatGPT rival drops in: AnthropicAI releases "Claude". Around mid-December, Anthropic released a very interesting paper, Constitutional AI: Harmlessness from AI Feedback. Now we have our first demonstration of Constitutional AI in action. It automates the final phase of RLHF (reinforcement learning from human feedback) by generating its own training examples from a set of rules, an "AI Constitution" so to speak. The trend of AI models creating their own datasets is only going to accelerate; it's a direct way for AI to improve itself.
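The critique-and-revise loop at the heart of that data-generation step can be sketched roughly as follows. This is a toy illustration, not Anthropic's implementation: the `llm` function, the prompts, and the constitution entries are all placeholders.

```python
# Rough sketch of a Constitutional-AI-style data-generation loop,
# NOT Anthropic's implementation. A placeholder `llm` stands in for
# a real language-model call.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest.",
]

def llm(prompt):
    # Placeholder: a real system would sample from a language model here.
    return f"<model output for: {prompt[:40]}...>"

def generate_training_pair(user_prompt):
    """Produce a (prompt, revised_response) pair with no human labeling:
    the model critiques and revises its own draft against each principle."""
    response = llm(user_prompt)
    for principle in CONSTITUTION:
        critique = llm(f"Critique this response under the principle "
                       f"'{principle}':\n{response}")
        response = llm(f"Rewrite the response to address this critique:\n"
                       f"{critique}\n\nOriginal response:\n{response}")
    return user_prompt, response

prompt, revised = generate_training_pair("How do I pick a lock?")
```

The resulting (prompt, revised response) pairs are what replace human preference labels in the final training phase, which is why the approach scales without additional human annotation.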

cross-posted from: Apple AI audiobook narration - TechLinked

Meta AI announces OPT-IML: a new language model with 175B parameters, fine-tuned on 2,000 language tasks — openly available soon under a noncommercial license for research use cases.
> For OPT-IML, we boosted performance of our OPT-175B work using instruction tuning, allowing us to adapt it for more diverse language applications (e.g. answering questions, summarizing text & translation).
>
> A 30B parameter model is available today, with a 175B model available soon.

AR glasses showing deaf people text from what they can’t hear are finally ready, 13 years after Ray Kurzweil predicted it would happen.
Now the question is availability and price. Basically all deaf people in the world need these, not just a small sample group.

The author argues that the transformation of society caused by automation has been happening for a long time, but what's different now is the rate of progress, which is significantly faster than a human career and can only get faster. The author suggests that we need to be asking ourselves what we want society to change into and whether we want to work like gas station attendants or change society so that displacement of work caused by AI leads to an increase in wealth, health, and happiness.
