• PyTorch @PyTorch · 3 weeks ago

    Large Language Models (#LLMs) are optimized for Intel GPUs, exposed as the `xpu` device in #PyTorch. Learn how to speed up local inference on Intel Arc discrete GPUs, integrated Intel graphics, and Arc Pro GPUs, bringing advanced AI to laptops and desktops. 🔗 hubs.la/Q03GYFrV0 #PyTorch #LLM #OpenSourceAI

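The `xpu` device mentioned in the post follows the same device-selection pattern as `cuda`. A minimal sketch, assuming a PyTorch build (2.4+) with Intel XPU support; the tiny linear layer is a stand-in for a real LLM, not an actual model:

```python
# Hedged sketch: select an Intel GPU ("xpu") when available, else fall back
# to CPU. The guard makes this safe on builds without XPU support.
import torch

def pick_device() -> torch.device:
    """Prefer Intel GPU ("xpu") when available, otherwise use CPU."""
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(16, 4).to(device)   # stand-in for a real LLM
x = torch.randn(2, 16, device=device)
with torch.inference_mode():
    y = model(x)
print(y.shape)  # torch.Size([2, 4])
```

The point of the shared `xpu`/`cuda` device-type convention is that existing `.to(device)`-style code needs no changes to target Intel GPUs.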
  • Alex Prompter @alex_prompter · 3 weeks ago

    @PyTorch This is a game changer for local inference; great to see optimizations for Intel Arc. Perfect timing for more powerful AI on laptops.

  • ░\_/TT\_/░ @pers0naluni0n · 2 weeks ago

    @PyTorch I want to see benchmarks of the Strix Halo Ryzen AI Max+ 395 vs. the ASUS NUC 15 Pro Plus with a Core Ultra 9 285H, both with 128 GB.

  • Max Dziura @MaxDziura · 3 weeks ago

    @PyTorch Exciting to see LLMs optimized for Intel GPUs in PyTorch! This could make advanced AI way more accessible on everyday devices.

  • 南北西东 @S_N_W_E · 3 weeks ago

    @PyTorch This is great to see. Lowering the barrier to entry for local inference on consumer hardware is a huge unlock for developers and researchers. More accessible hardware options will definitely accelerate innovation.

  • o-mega.ai @o_mega___ · 3 weeks ago

    Intel's PyTorch optimizations, like INT4 quantization and `torch.compile`, are delivering over 1.5x faster decoding speeds and 65% model compression on Arc GPUs, fundamentally shifting LLM inference to the edge. This local hardware acceleration is critical for autonomous AI agents, where real-time decision-making and minimal latency are non-negotiable for deploying robust AI workforces, a core focus for companies building the autonomous enterprise.

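The INT4 quantization mentioned above can be sketched as weight-only, per-row symmetric quantization. This hand-rolled version is illustrative only, not PyTorch's or Intel's actual pipeline (real deployments would use a library such as torchao), and `torch.compile` is pinned to the no-op "eager" backend so the sketch runs anywhere:

```python
# Illustrative weight-only INT4 quantization (per-row, symmetric).
# Assumption: a hand-rolled sketch of the idea, not the actual Intel/PyTorch
# quantization path mentioned in the comment above.
import torch

def quantize_int4(w: torch.Tensor):
    """Map float weights to integers in [-8, 7] with one scale per output row."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

# torch.compile is set to the no-op "eager" backend here so the sketch runs
# without a codegen toolchain; dropping `backend` enables real compilation.
@torch.compile(backend="eager")
def int4_linear(x, q, scale):
    return x @ dequantize(q, scale).t()

w = torch.randn(4, 16)          # stand-in weight matrix for one linear layer
q, scale = quantize_int4(w)
y = int4_linear(torch.randn(2, 16), q, scale)
print(y.shape)  # torch.Size([2, 4])
```

Storing 4-bit integers plus one float scale per row instead of 16- or 32-bit floats is where the claimed model-size reduction comes from; the rounding step bounds the per-weight error by half a scale.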
  • Robert Youssef @rryssf_ · 3 weeks ago

    @PyTorch Totally agree; these optimizations could really shift the landscape for local inference. Excited to see what developers come up with on consumer-grade hardware.

  • Mykhailo Sorochuk @sir4K_zen · 3 weeks ago

    @PyTorch Definitely! More compute options on laptops mean more room for experimentation and growth.

  • Hope River @DeryaEke330434 · 3 weeks ago

    @PyTorch That's awesome news for local AI development! My friend @GarrettShaw_FL has been experimenting with Intel Arc GPUs for ML projects - he'll be thrilled to see this optimization.

    • TwStalker is not affiliated with X™. All Rights Reserved. 2024 instalker.org
