Josh Beckman (www.joshbeckman.org)
Reference Note · llm, optimization
Source: Nathan Labenz on AI Pricing – Tyler Cowen
2023, January 08, Sunday
www.joshbeckman.org/notes/452253632

Pruning for sparsity – turns out some LLMs work just as well if you set 60% of the weights to zero (though this likely isn't true if you're using Chinchilla-optimal training).
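A minimal sketch of the idea, assuming PyTorch and simple unstructured magnitude pruning (the layer, sizes, and 60% ratio here are illustrative, not from the note): zero out the smallest fraction of a weight tensor's entries by absolute value.

```python
import torch

def magnitude_prune_(weight: torch.Tensor, sparsity: float = 0.6) -> torch.Tensor:
    """Zero the smallest-magnitude fraction of entries, in place."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    # Threshold = the k-th smallest absolute value in the tensor.
    threshold = weight.abs().flatten().kthvalue(k).values
    weight[weight.abs() <= threshold] = 0.0
    return weight

# Illustrative example: prune a random linear layer to ~60% sparsity.
layer = torch.nn.Linear(512, 512)
with torch.no_grad():
    magnitude_prune_(layer.weight, sparsity=0.6)

actual = (layer.weight == 0).float().mean().item()
print(f"sparsity: {actual:.1%}")  # ~60.0%
```

PyTorch also ships the same magnitude criterion as a built-in, `torch.nn.utils.prune.l1_unstructured(module, name="weight", amount=0.6)`, which applies the pruning via a mask rather than overwriting the weights.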