<section id="blogs-publications">
<h1>Blogs &amp; Publications<a class="headerlink" href="#blogs-publications" title="Link to this heading"></a></h1>
<ul class="simple">
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/optimize-stable-diffusion-upscaling-with-pytorch.html">Optimize Stable Diffusion Upscaling with Diffusers and PyTorch*, Sep 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/intel-ai-solutions-support-meta-llama-3-1-launch.html">Intel AI Solutions Boost LLMs: Unleashing the Power of Meta* Llama 3.1, Jul 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/intel-ai-solutions-accelerate-alibaba-qwen2-llms.html">Optimization of Intel® AI Solutions for Alibaba Cloud* Qwen2 Large Language Models, Jun 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-meta-llama3-with-intel-ai-solutions.html">Accelerate Meta* Llama 3 with Intel® AI Solutions, Apr 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/optimize-text-and-image-generation-using-pytorch.html">Optimize Text and Image Generation Using PyTorch*, Feb 2024</a></p></li>
<li><p><a class="reference external" href="https://pytorch.org/blog/ml-model-server-resource-saving/">ML Model Server Resource Saving - Transition From High-Cost GPUs to Intel CPUs and oneAPI powered Software with performance, Oct 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html">Accelerate Llama 2 with Intel AI Hardware and Software Optimizations, Jul 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-training-inference-on-amx.html">Accelerate PyTorch* Training and Inference Performance using Intel® AMX, Jul 2023</a></p></li>
<li><p><a class="reference external" href="https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-intel-dl-boost-improve-inference-performance-of-hugging-face-bert-base-model-in-google-cloud-platform-gcp-technology-guide">Intel® Deep Learning Boost (Intel® DL Boost) - Improve Inference Performance of Hugging Face BERT Base Model in Google Cloud Platform (GCP) Technology Guide, Apr 2023</a></p></li>
<li><p><a class="reference external" href="https://www.youtube.com/watch?v=Id-rE2Q7xZ0&amp;t=1s">Get Started with Intel® Extension for PyTorch* on GPU | Intel Software, Mar 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html">Accelerate PyTorch* INT8 Inference with New "X86" Quantization Backend on X86 CPUs, Mar 2023</a></p></li>
<li><p><a class="reference external" href="https://huggingface.co/blog/intel-sapphire-rapids">Accelerating PyTorch Transformers with Intel Sapphire Rapids, Part 1, Jan 2023</a></p></li>
<li><p><a class="reference external" href="https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-improve-inference-performance-of-bert-base-model-from-hugging-face-for-network-security-technology-guide">Intel® Deep Learning Boost - Improve Inference Performance of BERT Base Model from Hugging Face for Network Security Technology Guide, Jan 2023</a></p></li>
<li><p><a class="reference external" href="https://www.youtube.com/watch?v=066_Jd6cwZg">Scaling inference on CPUs with TorchServe, PyTorch Conference, Dec 2022</a></p></li>