#Algorithm Internship# #LLM# We are currently seeking full-time algorithm interns to work on efficient LLM inference in the Intel/DCAI/AISE group. The position is based at Zizhu, Shanghai. You will work on exciting projects such as INC (Intel Neural Compressor) [https://github.com/intel/neural-compressor] and ITREX (Intel Extension for Transformers) [https://github.com/intel/intel-extension-for-transformers]. If you are passionate about this field and would like to apply, please send your resume to wenhua dot cheng @ intel.com.