
TensorOpera and Qualcomm Technologies Join Forces to Empower AI Developers With Cutting-Edge Generative AI Solutions

Businesswire ·  05/16 07:00

PALO ALTO, Calif.--(BUSINESS WIRE)--#generativeAI--TensorOpera, Inc., the company providing "Your Generative AI Platform at Scale," has announced a technology collaboration with Qualcomm Technologies, Inc. to deliver solutions that enable artificial intelligence (AI) developers to build, deploy, and scale generative AI applications. By pairing the company's TensorOpera AI Platform with Qualcomm Cloud AI 100 inference solutions from Qualcomm Technologies, developers will be able to harness Qualcomm Technologies' advanced AI technologies featured on the TensorOpera AI Platform.


The rapid growth of powerful open-source foundation models, along with the availability of faster and more affordable AI hardware, has encouraged many enterprises—from startups to large companies—to develop their own generative AI applications, providing greater privacy, control, and ownership. However, many encounter challenges with complex generative AI software stacks, infrastructure management, and high computational costs for scaling and bringing their applications to production.

To help address these challenges, developers can look to the TensorOpera AI Platform, a comprehensive stack designed to simplify the complexities of generative AI development. With the Cloud AI 100's ability to facilitate distributed intelligence from the cloud to the client edge, as well as its industry-leading energy efficiency, portability, and flexibility, the TensorOpera AI Platform will be able to provide exceptional performance-per-dollar and cost efficiency, making it an attractive choice for developers and enterprises.

AI developers now have the opportunity to access Cloud AI 100 instances on the TensorOpera AI Platform, designed to enable the use of popular generative AI models, including Llama3 by Meta and Stable Diffusion by Stability AI. They can choose from various usage models, including API access, on-demand (pay-as-you-go), or dedicated deployments, while leveraging capabilities such as autoscaling, comprehensive endpoint monitoring, optimized job scheduling, and AI Agent creation.
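For illustration only, the sketch below shows what API access to a hosted model endpoint might look like from Python. The endpoint URL, model identifier, authentication scheme, and request/response fields are assumptions made for this example and are not TensorOpera's published API; consult the platform documentation for the actual interface.

    import os
    import requests

    # Hypothetical example: querying a Llama3 endpoint hosted on the
    # TensorOpera AI Platform over HTTP. URL, model ID, auth header, and
    # payload/response shapes are illustrative assumptions.
    API_KEY = os.environ["TENSOROPERA_API_KEY"]                    # assumed auth scheme
    ENDPOINT = "https://api.tensoropera.ai/v1/chat/completions"    # assumed URL

    payload = {
        "model": "meta/llama3-8b-instruct",                        # assumed model ID
        "messages": [
            {"role": "user", "content": "Summarize edge-cloud AI in one sentence."}
        ],
        "max_tokens": 128,
    }

    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape, mirroring common chat-completion APIs.
    print(resp.json()["choices"][0]["message"]["content"])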

Salman Avestimehr, co-founder and CEO of TensorOpera, expressed enthusiasm about the technology collaboration: "We are thrilled to work with Qualcomm Technologies. It expands the compute options for AI developers on our AI Platform. Our work together also aligns with our shared long-term vision of integrated edge-cloud platforms, which we believe will drive widespread adoption of generative AI. In line with this vision, TensorOpera will soon launch its new foundation model optimized for smartphones and edge devices. Integrated into the TensorOpera AI Platform, this model enables the development of powerful AI agents directly on mobile devices—a field where Qualcomm has significantly invested by delivering high-performance, efficient compute chips for smartphones."

"With the explosion of new generative AI models, developers around the world are hungry for easy, effective access to high-performance AI inference for deployment," said Rashid Attar, Vice President, Cloud Computing, Qualcomm Technologies, Inc. "By combining TensorOpera's AI Platform with Qualcomm Technologies' Cloud AI 100, developers now have immediate access to deploy the most popular GenAI/Large Language Models - Llama3, Mistral, SDXL - at the push of a button. We are excited to collaborate with TensorOpera to deliver a high-performance inference platform that offers exceptional value and convenience to developers."

This technology collaboration represents a significant step towards accelerating generative AI deployment and creating new opportunities for innovation. The combined strengths of TensorOpera's AI Platform and Qualcomm Technologies' Cloud AI 100 inferencing solutions will drive progress in AI applications across multiple industries. We invite AI developers and enterprises to explore the services offered by TensorOpera and Qualcomm Technologies. Get started today by applying for early access to Qualcomm-TensorOpera dedicated or serverless model endpoints.

About TensorOpera, Inc.

TensorOpera, Inc. (formerly FedML, Inc.) is an innovative AI company based in Palo Alto, California, in the heart of Silicon Valley. TensorOpera specializes in developing scalable and secure AI platforms tailored for enterprises and developers, with two flagship products:

(1) TensorOpera AI Platform: Accessible at TensorOpera.ai, this platform serves as a comprehensive generative AI ecosystem. It features robust tools for enterprise AI platforms, model deployment, model serving, AI agent APIs, and more. It supports launching training and inference jobs on a serverless/decentralized GPU cloud, experiment tracking for distributed training, and enhanced security and privacy measures.

(2) TensorOpera FedML: Available at FedML.ai, this platform is a leader in federated learning and analytics, supporting zero-code implementation. It includes a lightweight, cross-platform Edge AI SDK suitable for edge GPUs, smartphones, and IoT devices. Additionally, it offers a user-friendly MLOps platform to streamline decentralized machine learning and deployment in real-world applications.

Founded in February 2022, TensorOpera has quickly grown to support a large number of enterprises and developers worldwide.

Qualcomm Cloud AI and Qualcomm AI Stack are products of Qualcomm Technologies, Inc., and/or its subsidiaries.


Contacts

contact@tensoropera.com
