California Intends to Employ AI to Respond to Your Tax Inquiries

This time of year, the California tax office is always buzzing with activity as hundreds of thousands of residents and businesses seek tax advice. Phones ring and keyboards clack.

Call volume can reach 10,000 calls a day, pushing the average wait time from four minutes to twenty. “When the bell rings at 7:30 you (already) have a wait,” said call center chief Thor Dunn, adding that employees with other duties are trained to pick up the phone during busy stretches. “Everyone is on deck.”

That is why California’s 3,696-person Department of Tax and Fee Administration plans to use generative artificial intelligence later this year, in time for the coming tax season, to assist its roughly 375 call center agents with the state tax code. The AI will then inform the advice those agents give to California business owners seeking tax guidance.

Generative artificial intelligence models, trained on vast datasets often scraped from the internet without authors’ permission, can produce text, image, and audio content. OpenAI’s ChatGPT, a large language model that debuted in fall 2022, sparked widespread interest in generative AI; it works by predicting the next word in a sequence of input text, generating responses that reflect its training data.

What would that look like for you, the person calling the tax center? A slide in the tax department’s request for proposals seeking a vendor states that any AI solution must “be able to provide responses to incoming voice calls, live chats, and other communications,” but the tax department told CalMatters that the technology will not be used without a call center employee present to review the answer.

That request for proposals was issued last month with the goal of using AI to help the state with taxes. Initial proposals are due this week, and the process should be completed by April. A meeting for prospective vendors last month drew 100 attendees, department spokesperson Tamma Adamek told CalMatters.

The Tax Guidance

Liana Bailey-Crimmins, acting head of the California Department of Technology and state chief information officer, says the AI tax proposal is one of five proofs of concept the state has launched to explore how state agencies can use generative AI. The state’s Health and Human Services Agency is running two trials to see whether generative AI can help people understand and obtain public benefits and can aid in health care facility inspections. Caltrans is also working on two projects to examine whether generative AI can reduce traffic congestion and deadly accidents.

The vendor that wins the AI tax proposal will receive a six-month contract; after that, state officials will decide whether to award a longer one. The initiative must demonstrate reduced call times, wait times, and abandoned calls, and the vendor must “monitor and report on GenAI solution responses for factual accuracy, coherence, and appropriateness.”

The initiative marks the beginning of an iterative, multiyear process for AI regulation and adoption that Governor Gavin Newsom set in motion last autumn. The executive order he issued requires state agencies to explore uses of generative AI by July.

To reduce risks, private companies awarded contracts will train AI models in a “sandbox” hosted on state servers and designed to follow information security and monitoring guidelines set by the technology department. Newsom’s executive order requires the technology department to make the sandbox available to those companies in March.

AI Risk Assessment

In November 2023, the state’s Government Operations Agency assessed the advantages and disadvantages of generative AI. The report warns that generative models may produce plausible but erroneous results, give different answers to the same question, and suffer model collapse when predictions drift away from true outcomes. The use of generative AI also carries a risk of automation bias, which occurs when users become unduly dependent on and trusting of automated decision-making.

It’s unclear exactly how the tax department’s call center staff will decide which responses from large language models to trust.

According to Adamek, the tax department’s spokesperson, call center agents are trained on fundamental tax and fee programs and can ask more seasoned team members for help when they have questions about a particular topic. The technology department, working with other state departments, is slated to help train state personnel in July on identifying incorrect or fraudulent text.

According to Adamek, the tax department does not view its planned use of generative AI as high risk because the project is primarily about improving state business processes and all relevant data is already available to the general public. The department will evaluate risk later in the process, she said. Guidelines and standards for state entities that contract with private companies are scheduled to be released in the coming weeks.

The technology department may not share the tax department’s view that its use of generative AI is not high risk.

Under Newsom’s directive, all state agencies must give the Department of Technology a list of the high-risk generative AI applications they are using within 60 days. Bailey-Crimmins told CalMatters that none of the governor’s agencies are using high-risk generative AI.

A new rule requires the technology department to catalog, by September at the latest, all high-risk AI applications and automated decision-making systems used by state entities.

However, some people outside of government are concerned about some of California’s AI initiatives. Among them is Justin Kloczko, the Los Angeles-based author of the Consumer Watchdog report Hallucinating Risk, which explores the possible risks posed by AI patents held by banks and used in financial services. He points out that OpenAI, the San Francisco-based company that created ChatGPT, warns in its own documentation that using AI to provide financial advice or offer basic services carries significant risk.

“There’s still a lot we don’t know about generative AI and what we do know is that it makes mistakes and acts in ways that people who study it don’t even fully understand,” Kloczko said. He also questioned how easily a call center employee could judge whether that information is accurate, since the employee may not be qualified to tell whether text produced by a large language model, which is built to sound convincing, is in fact inaccurate or false.

“I worry that workers in charge of this won’t understand the complexity of this AI,” he said. “They won’t know when they’re led astray.”

“We take those risks seriously,” said Bailey-Crimmins, who added that possible drawbacks will be weighed when deciding what to do after the six-month trial project.

“We want to be excited about benefits, but we also need to make sure that what we’re doing is safeguarding… the public puts a lot of trust in us and we need to make sure that the decisions we’re making (are) not putting that trust in question.”

Verituity Secures $18.8 Million for Expansion of AI-Driven Verified Payout Platform

Verituity has raised $18.8 million to finance the expansion of its verified payout platform for businesses and consumers.

According to a press release from Verituity on Friday, June 21, the company plans to use the additional funds to expand into new markets like mortgage servicing and energy, enhance its growth in the banking and insurance sectors, and continue developing the machine learning (ML) and artificial intelligence (AI) models that underpin the platform.

Verituity “orchestrates billions of dollars in verified B2B and B2C payouts by empowering businesses and banks to deliver trusted and intelligent payments on-time to known individuals and businesses,” CEO Ben Turner said in the press release. “As we continue on our journey to ultimately do away with checks and integrate intelligent, verified payouts into the very fabric of business disbursements, I look forward to working with our investors.”

According to the statement, the company’s technology adds intelligence to each disbursement and knows and validates every payer, payee, account, and transaction.

According to the release, doing so reduces risks, maximizes payout economics, and guarantees that digital payments are made on schedule, to the correct payee and payment account, and from the correct funding account.

Sandbox Industries and Forgepoint Capital spearheaded the company’s most recent round of funding.

Chris Zock, managing partner and co-CEO of Sandbox Industries, said in a press statement that Verituity’s “unique approach to embedding verification into payouts and handling the complexity of connecting legacy treasury systems to digital payments is transformative for the industry.”

Verituity, according to Don Dixon, co-founder and managing director of Forgepoint Capital, is “well positioned to take full advantage of the rapid transformation underway in disbursements” because it combines intelligent payments, trust, and verification.

Verituity and Mastercard partnered in April to allow commercial banks and payers to make payments almost instantly.

As part of that partnership, Mastercard Move, Mastercard’s suite of local and international money transfer options, is integrated into Verituity’s white-labeled payments platform. Thanks to that connection, the Verituity platform can offer consumers fast payee and transaction verification while shortening its time to market.

In a press statement announcing the collaboration, Turner stated, “We’re excited to work with Mastercard to include more banks in the safe disbursement and remittance ecosystem.”

Anthropic, an OpenAI Rival, Revealed Its Most Potent AI to Date

Anthropic, an OpenAI rival, unveiled Claude 3.5 Sonnet, its most potent AI model to date, on Thursday.

Claude is one of the chatbots, along with Google’s Gemini and OpenAI’s ChatGPT, that has surged in popularity over the last year. Anthropic, founded by former OpenAI research executives, counts Google, Salesforce, and Amazon among its backers and has closed five funding deals worth a combined $7.3 billion in the past year.

The announcement follows OpenAI’s release of GPT-4o in May and the debut of Anthropic’s Claude 3 family of models in March. Claude 3.5 Sonnet, the first model in Anthropic’s new Claude 3.5 family, is faster than the company’s previous top model, Claude 3 Opus, Anthropic says.

Claude 3.5 Sonnet is available for free on the company’s Claude.ai website and the Claude iPhone app. Claude Pro and Team subscribers get higher rate limits.

In addition to producing high-quality content in a conversational, natural tone, the system “shows marked improvement in grasping nuance, humor, and complex instructions,” according to a company blog post. It can also write, edit, and run code.

Anthropic also unveiled “Artifacts,” a feature that lets users instruct its chatbot, Claude, to perform tasks such as generating code or text documents and then view the result in a separate window. The company expects Artifacts to benefit code development, business report writing, and other tasks. “This creates a dynamic workspace where they can see, edit, and build upon Claude’s creations in real-time,” the company said.

As generative AI startups like Anthropic and OpenAI gain traction, they are competing with tech behemoths like Google, Amazon, Microsoft, and Meta in an arms race to incorporate AI technology and stay ahead of a market that is expected to generate $1 trillion in revenue over the course of the next ten years.

News of the new model comes after Anthropic debuted its first enterprise product in May.

Anthropic co-founder Daniela Amodei told CNBC last month that the plan for businesses, called Team, had been in development for the past few quarters and involved beta-testing with between 30 and 50 customers in industries like technology, financial services, legal services, and health care. According to Amodei, many of those same customers requested a specific corporate solution, which served as inspiration for the service’s concept.

At the time, Amodei remarked, “So much of what we were hearing from enterprise businesses was that people are kind of using Claude at the office already.”

Mike Krieger, co-founder of Instagram, joined Anthropic as chief product officer last month, not long after the company unveiled its new product. According to a release, Krieger, the former chief technology officer of Meta-owned Instagram, grew the platform’s user base to 1 billion and expanded its engineering staff to more than 450. Jan Leike, a former safety leader at OpenAI, also joined the company in May.

Materia Unveils GenAI Platform for Public Accounting Firms After Exiting Stealth

Materia has emerged from stealth with more than $6.3 million in funding to introduce a generative artificial intelligence (AI) platform designed specifically for public accounting firms.

According to a press release issued by the company on Thursday, June 20, the platform aims to give these firms intelligent technology that frees up the time they now spend on low-value, tedious daily tasks.

Materia CEO and co-founder Kevin Merlini said in the press release that the company was formed to meet a pressing demand for time-saving solutions that also handle the heavy lifting of daily workflows while maintaining a high standard of accuracy and security.

The press release states that the company’s technology compiles firms’ internal knowledge into a secure Knowledge Hub, creating a structured enterprise search layer that bridges silos.

According to the announcement, this hub then powers the Materia AI Assistant and Document Analysis Workspace, which draw on the data to deliver reliable answers grounded in proprietary knowledge and recognized accounting standards.

According to the announcement, the platform can be adopted in a matter of days, offers responsible AI backed by meticulous accuracy testing by CPA subject matter experts, and provides an approach for organizations that require specific customization or interfaces.

Natalie Sandman, a general partner at Spark Capital, which led the funding, stated in the statement that the company already works with prestigious national firms and that the feedback from these clients has been “overwhelmingly positive.”

According to Sandman, “We think Materia’s AI solution will revolutionize the accounting industry by expediting routine tasks for accounting professionals and enabling them to deliver higher-quality services to their clients more effectively.”

According to PYMNTS Intelligence, chief financial officers (CFOs) are using AI to improve a variety of organizational efficiencies. Sixty-three percent of CFOs say the need for lower-skill personnel has decreased, and 58% say they now need more people with analytical skills.

In another recent fundraising event in this space, AI company Fieldguide said this past March that it had raised $30 million for its accounting-sector product. Fieldguide’s AI solution automates workflows and streamlines operations, giving CPAs more time to work on high-value tasks.
