Symbolica Launches with $33M Funding to Transform the AI Industry with Symbolic Models

Today, the artificial intelligence startup Symbolica AI made its debut, introducing a novel method for creating generative AI models.

The company aims to tackle the high cost of training and deploying large language models such as OpenAI’s ChatGPT, which are based on the Transformer architecture.

Alongside the launch, the company disclosed today that it has raised $33 million in total capital across a seed round and a Series A round led by Khosla Ventures. Day One Ventures, Abstract Ventures, Buckley Ventures, and General Catalyst were among the other investors.

Transformer deep learning architectures have come to dominate the field, particularly for large language models, as demonstrated by systems such as Google LLC’s Gemini, Anthropic PBC’s Claude, and OpenAI’s ChatGPT. Their dominance rests on widespread adoption and an abundance of tools for building and deploying them, even though they are costly and complex. They also require enormous amounts of energy and data, are challenging to validate, and have a propensity to “hallucinate,” the term for when a model confidently presents an incorrect assertion as fact.

Unlike Transformers, which learn statistical and contextual relationships between inputs from previously seen content, Symbolica builds AI systems as structured models that define tasks through the manipulation of symbols. Symbolic AI uses symbols to represent sets of rules, which allows models to be pretrained for specific tasks such as word processing or coding.
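To make the general idea concrete, here is a toy sketch of rule-based symbol manipulation. This is not Symbolica’s technology or code; it is only a minimal, hypothetical illustration of how explicit rules over symbols (here, algebraic identities) can define a task without any statistical learning:

```python
# Toy symbolic rewrite system -- an illustration of rule-based symbol
# manipulation, NOT Symbolica's actual approach. Expressions are nested
# tuples of the form (op, left, right); bare strings/numbers are atoms.

# Each rule pairs an operator with its identity element and a rewrite.
RULES = [
    ("add", 0, lambda other: other),  # x + 0 -> x
    ("mul", 1, lambda other: other),  # x * 1 -> x
    ("mul", 0, lambda other: 0),      # x * 0 -> 0
]

def simplify(expr):
    """Recursively apply the rewrite rules to a symbolic expression."""
    if not isinstance(expr, tuple):
        return expr  # an atom: a symbol name or a number
    op, left, right = expr
    left, right = simplify(left), simplify(right)
    for rule_op, identity, rewrite in RULES:
        if op == rule_op and right == identity:
            return rewrite(left)
        if op == rule_op and left == identity:
            return rewrite(right)
    return (op, left, right)

# ("x" * 1) + 0 simplifies all the way down to the bare symbol "x"
print(simplify(("add", ("mul", "x", 1), 0)))  # -> x
```

Because every step is an explicit rule application, the reasoning behind each result can be traced exactly, which is the kind of interpretability the symbolic approach promises.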

Based on the idea of “categorical deep learning,” the startup applies structured mathematics to define the relationships between symbols. The approach is detailed in a paper the startup recently co-authored with Google DeepMind. Compared with large, unstructured models like GPT, structured models require less data overall and can operate on less computing power, because they classify and encode the underlying structure of the data.

“Regime-specific structured reasoning abilities can be generated in significantly smaller models by combining advances in deep learning with a rich mathematical toolbox,” stated George Morgan, CEO of Symbolica, in an interview with TechCrunch.

The company plans to create a toolkit that will enable the creation of “interpretable” models—meaning that users will be able to decipher the reasoning behind the AI network’s decisions. This should increase the transparency of the models, making it much easier for developers to monitor and debug them.

For highly regulated industries like healthcare and finance, where inaccuracies could have disastrous consequences, interpretability is essential to building better AI. Understanding an AI system’s knowledge base and decision-making process is also critical for providing the transparency that regulatory audits require.

The company’s first product, a coding assistant, will launch in early 2025, according to Morgan, who spoke with Reuters. This is because the company needs to build and train its model first.

Categories: Technology
Kajal Chavan
