Josh Beckman (www.joshbeckman.org) – Reference Notes

Pruning for sparsity: it turns out some LLMs work just as well if you set 60% of the weights to zero (though this likely isn't true if the model was trained Chinchilla-optimally).

Tags: llm, optimization
Source: Nathan Labenz on AI Pricing, via Tyler Cowen
Date: Sunday, January 8, 2023
Permalink: www.joshbeckman.org/notes/452253632
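As a rough illustration of what "set 60% of the weights to zero" means in practice, here is a minimal sketch of unstructured magnitude pruning with NumPy. This is a generic technique, not anything from the linked source; the `magnitude_prune` function name and the toy weight matrix are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.6):
    """Zero out the smallest-magnitude fraction of weights (unstructured magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # keep only weights above the threshold
    return weights * mask

# Toy example: prune a random 4x8 weight matrix to ~60% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
pruned = magnitude_prune(w, sparsity=0.6)
print(f"fraction zeroed: {np.mean(pruned == 0):.2f}")
```

In real pruning work the mask is usually applied per layer (or per weight group), and the claim in the note is that for some large models inference quality barely changes at this sparsity level.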