AV2, a generation leap in open video coding and the answer to the world’s growing streaming demands, delivers significantly better compression performance than AV1. AV2 provides enhanced support for AR/VR applications, split-screen delivery of multiple programs, improved handling of screen content, and an ability to operate over a wider visual quality range. AV2 marks a milestone on the path to an open, innovative future of media experiences.
So AV2 in a homelab seems grossly inappropriate; this looks like something geared more toward server-farm use. Odds are most homelabs aren’t rocking that kind of hardware, and there’s an energy-consumption cost and a hardware cost to weigh against how much someone is really saving compared to just AV1, which an Intel Arc card can handle. You’d probably have to be running a homelab with hundreds of TB of content to even come close to maybe wanting AV2 hardware and the codec?
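The savings argument above can be sketched with some back-of-the-envelope math. Note the ~30% bitrate reduction used here is a hypothetical placeholder for whatever AV2-over-AV1 gain actually materializes, not a measured figure, and it ignores re-encode time and quality loss from transcoding already-compressed files:

```python
# Rough storage savings from re-encoding an AV1 library to AV2.
# bitrate_reduction is a hypothetical assumption, not a benchmark result.

def av2_savings_tb(library_tb: float, bitrate_reduction: float = 0.30) -> float:
    """Return TB saved if every file shrinks by `bitrate_reduction`."""
    return library_tb * bitrate_reduction

# A 100 TB library at an assumed 30% reduction frees roughly 30 TB;
# a typical 5 TB homelab frees only ~1.5 TB, which is why the
# hardware/energy cost rarely pencils out at small scale.
print(av2_savings_tb(100.0))
print(av2_savings_tb(5.0))
```

Whether those TB are worth new encode hardware and the electricity to re-encode everything is the real question the thread is circling.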
So what makes AV2 better than AV1?
https://aomedia.org/press-releases/AOMedia-Announces-Year-End-Launch-of-Next-Generation-Video-Codec-AV2-on-10th-Anniversary/
AV2 vs AV1: What the Next-Gen Video Codec Brings to the Table
Or an older but more comprehensive article: AV1 vs AV2 Video Codec: 7 Key Differences You Need to Know. Its key takeaways:
Ah, but split-screen delivery may mean Plex content on an Apple Vision Pro or something VR-related, right?