Microsoft’s GitHub Copilot aims to put AI to work in programming as soon as possible

There is a lot of disagreement over how much generative AI can assist programmers. David Gewirtz of ZDNET has discovered through first-hand testing that OpenAI’s ChatGPT “can write pretty good code.” However, some research has shown that the overall code quality of large language models, like GPT-4, is far lower than that of human coders.

Still, some contend that the debate over whether AI is the better coder misses the mark. The real key to automated coding assistance, they argue, is changing the nature of a programmer’s work.

“If you ask me what is the big change, what’s happened with the world of generative AI is that we have created another abstraction layer on top of AI,” said Inbal Shani, chief product officer for GitHub, the developer site owned by Microsoft, in a recent interview with ZDNET.

Originally, the purpose of that abstraction layer—natural language—was limited to code completion. “That’s the basic layer that we’ve seen,” she stated. Shani contends that the abstraction layer’s power lies in its ability to extend AI’s applications far beyond code completion.

GitHub Copilot, the company’s code-assistance tool, was released in June 2021. According to Shani, this year has been “a transformational year” for AI in programming. In an October announcement, Microsoft CEO Satya Nadella said that more than 37,000 organizations and over a million paying customers use Copilot.

Shani mentioned well-known Copilot users like Accenture, which has used Copilot to train hundreds of developers. “They’ve seen that there was a lot of usage to reduce what we call boilerplate code, the repetitive code that developers do not necessarily like to write, but have to because it’s part of their foundations.”
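
For illustration only, the kind of repetitive scaffolding Shani describes might look like the hypothetical Python class below, whose constructor, equality check, and string representation follow a pattern predictable enough for a tool like Copilot to fill in; the class and field names are invented for this example.

    # Hypothetical boilerplate: none of this is project-specific logic, yet it all has to be typed.
    class Invoice:
        def __init__(self, invoice_id, customer, amount, currency="USD"):
            self.invoice_id = invoice_id
            self.customer = customer
            self.amount = amount
            self.currency = currency

        def __eq__(self, other):
            if not isinstance(other, Invoice):
                return NotImplemented
            return (self.invoice_id, self.customer, self.amount, self.currency) == \
                   (other.invoice_id, other.customer, other.amount, other.currency)

        def __repr__(self):
            return (f"Invoice({self.invoice_id!r}, {self.customer!r}, "
                    f"{self.amount!r}, {self.currency!r})")

In Python, much of this can also be generated with the standard library’s dataclasses module, which is exactly the sort of code developers would rather not write by hand.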

Shani stated that Accenture has kept 88.5% of the Copilot code. “So this means that copilot was able to provide a high accuracy — high-fidelity answers to their developers that they choose to keep that code and not need to rewrite it.”

Using Copilot has increased Accenture’s productivity by 15%, as measured by the number of pull requests completed on time when new code is merged into a project’s main source. Developers there have also been “more apt to go through” the build process, the step that turns code into a working binary.

“Sometimes, developers hold themselves back” from doing builds, she noted. “They say, I don’t trust, I need to test again, but using Copilot, it kind of helped build that trust to deploy more code into production.”

More pull requests, more builds, and less time writing boilerplate code could all add up to small but immediate, meaningful improvements in how developers spend their days.

“If we can increase the build rate in a consistent way, then that basically helps developers to spend less time waiting for builds, to have more time back to focus on architecture and so on,” said Shani.

“A shocking discovery that happened for me is that developers have less than two hours a day to write code,” on average, said Shani. “They need to do many things that are around the software development lifecycle, but not around the coding — they do builds, they write tests, they sit in meetings, they need to engage with other folks, they need to write PRs [pull requests].”

One possible benefit of automating some of those tasks is that “we’re giving more bandwidth for developers to invest in the other areas.”

Shani acknowledged that none of this had been fully and rigorously measured in terms of increased productivity. Regarding the productivity measurement process, she stated, “I think we’re in the middle of that.” The likes of Copilot “have not been adopted for long enough for us to get real, substantial data that we can say, here’s how we’ve changed lives forever.”

Productivity, she said, is hard to define. The speed of code completion is “not necessarily an indicator of success,” since “you can write really crappy code really fast.”

Rather, said Shani, “the work that we have ongoing is, What is really time to value? What is that impact? How do we measure the impact of these tools that we have been adopting along the way? That’s still ongoing.”

“How to define developer happiness,” according to Shani, is another crucial component that needs to be quantified in some way. “It’s very important for developers to be recognized, and right now, the recognition is coming in some companies from measuring how many lines of code am I writing.” She does, however, point out that a programmer’s verbosity may not be the best measure of their skill.

One of the more significant benefits of the new abstraction layer emerging in AI is eliminating the need to switch between tools.

“Usually, if I’m looking for something I don’t know how to write, I’ll go to some sort of search engine,” explained Shani. “Copilot was able to bring all of that into the same environment.” The interface, the prompt, “is right there in your IDE [integrated development environment],” so that “you don’t need to go to different tools, you don’t need to copy-paste, you don’t need to do all that; you basically stay where you write your code.”

The result, according to her, is that “developers are happy because they have less context-switching between tools.”

Copilot is also starting to spread beyond the programming team to other departments. According to Shani, one significant Copilot user is the e-commerce company Shopify, which uses the tool in coding interviews with prospective employees. Copilot is also being used as a “peer programmer,” or educator, to help new programmers get up to speed during onboarding.

According to Shani, when Copilot and comparable tools fail to yield the desired results, the learning curve of prompt engineering is likely a major factor. “You still need to know how to ask the right question,” she stated.

“The more you ask a broader question [at the prompt], the more general the solution you’ll get that is not necessarily applicable for your situation,” whereas, “the more you know how to ask the right questions, the better you get an answer from Copilot.”

As for “that change management,” she explained that Microsoft is working with clients like Accenture on “how to think about the question you ask Copilot to get the right answer that is applicable” and “how to write a proper prompt.”
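
As a purely hypothetical illustration of that point, compare two prompts a developer might type as comments in the editor; the function names and completions below are invented for this example rather than actual Copilot output.

    import csv
    from datetime import datetime

    # Vague prompt: "parse the file"
    # A completion for this can only guess at intent, so it tends to be generic.
    def parse_file(path):
        with open(path) as f:
            return f.read()

    # Specific prompt: "parse a CSV of orders with columns order_id, amount_usd,
    # created_at (ISO 8601); skip malformed rows; return a list of dicts with
    # created_at converted to datetime"
    # A completion for this has enough context to be directly usable.
    def parse_orders_csv(path):
        orders = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                try:
                    orders.append({
                        "order_id": row["order_id"],
                        "amount_usd": float(row["amount_usd"]),
                        "created_at": datetime.fromisoformat(row["created_at"]),
                    })
                except (KeyError, ValueError):
                    continue  # skip malformed rows, as the prompt asked
        return orders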

Copilot is still under heavy development, which will likely have a significant influence on both its accuracy and usefulness. One emerging direction is “personalizing” the program for a specific developer. “An aspect we’re working on is how we can help these models to understand your coding style,” stated Shani, “to understand which of these elements are critical for you as a software developer, to adjust the recommendations we give you.”

An enterprise version of Copilot will be generally available from GitHub in February. “This is specifically about more customized models for enterprises that want to have their own flavor of that implementation,” Shani stated.

In the enterprise version, “you’re going to have the ability to summarize PRs or add comments to the code using Copilot, or search your documents and get that document you’re looking for.” More attention will also be paid to how Copilot handles testing and stress testing.

According to Shani, the main goal is to “centralize everything with the same kind of AI flow model across software development, from inception to production.”

The chipmaker Advanced Micro Devices is among the enterprise edition beta testers, primarily for optimizing AMD’s in-house generative AI models. “We have a long waiting list of more customers that want to enter,” she said. “We’re taking it through a lot of rigorous testing, and we want to get a lot of feedback from customers that are currently on our beta program before we feel confident to share.”

Talk of developer happiness may seem odd given claims that programming jobs could be eliminated by using AI to automate code. But Shani is adamant that’s not the case. “It’s not going to replace developers, not in the next, I would say, five, ten years,” she stated. “I’m in the camp that says never, because we’re just going to evolve as developers.”

Shani has been working with AI for more than 20 years. She joined GitHub a year ago, after running the Elastic Containers product at Amazon’s AWS. She talks about her own experience as a programmer transitioning from Fortran to C++ to Java to Python. “At every point in time, everyone was freaking out: oh, my God, this is going to take away the work of developers.”

But, “We’ve seen more increase in developers because now we have lowered the barrier to be able to write more software.”

Instead, Shani compares the rise of AI copilots to “the same industrial revolution that led to factories that scaled food production to meet demand.” “That’s what’s taking place now: there’s more demand for software, so there’s more demand for software developers.”

Could Copilot and similar software actually reduce the time it takes to develop a project if accurate code generation can be automated and if context switching can be minimized by the abstraction layer?

Programmer Fred Brooks noted in his book The Mythical Man-Month that merely adding resources to a large programming project did not always expedite it; in fact, most of the time, it made matters worse.

It remains unclear whether AI will significantly improve project scheduling and management, or reduce the overall amount of work needed for a big programming project.

“I don’t know if the concept of many months will turn to seconds,” Shani replied. “Things will still take the right time to mature, but I think that the way to get there will be smoother and more efficient along the way if we can get to that value that we’re looking for in a shorter period of time.”

A New Era of Reliable AI Announced by Tech Mahindra and IBM Watsonx

Together with IBM, Tech Mahindra, a global leader in digital solutions and technology consulting, is helping organizations around the globe accelerate the adoption of generative AI in a sustainable manner.

The partnership combines IBM’s Watsonx AI and data platform and its AI assistants with Tech Mahindra’s suite of AI offerings, TechM amplifAI0->∞.

Customers may now access a range of new generative AI services, frameworks, and solution architectures by combining Tech Mahindra’s AI engineering and consulting talents with IBM Watsonx’s capabilities. This makes it possible to create AI programs that let businesses automate operations using their reliable data. Additionally, it gives companies a foundation on which to build reliable AI models, encourages explainability to help control bias and risk, and permits scalable AI deployment in on-premises and hybrid cloud settings.

Kunal Purohit, chief digital services officer at Tech Mahindra, says that organizations should prioritize responsible AI practices and the integration of generative AI technology in order to revitalize their businesses.

“Our partnership with IBM can facilitate digital transformation for businesses, the uptake of GenAI, modernization, and ultimately business expansion for our international clientele,” Purohit continued.

To further strengthen its enterprise AI capabilities, Tech Mahindra has created an operational virtual Watsonx Center of Excellence (CoE). The CoE serves as a co-innovation center, with a dedicated team tasked with optimizing synergies between the two organizations and using their combined competencies to produce unique offerings and solutions.

The collaborative offerings and solutions developed through this partnership could help enterprises build machine learning models using open-source frameworks while also enabling them to scale and accelerate the impact of generative AI. These AI-driven solutions have the potential to help organizations enhance efficiency and productivity responsibly.

IBM Ecosystem General Manager Kate Woolley emphasized the potential of the partnership and added that, when generative AI is developed on a basis of explainability, openness, and trust, it may act as a catalyst for innovation and open up new market opportunities.

“Our partnership with Tech Mahindra is anticipated to broaden Watsonx’s user base and enable even more clients to develop reliable AI as we strive to integrate our know-how and technology to support enterprise use cases like digital labor, code modernization, and customer support,” stated Woolley.

This partnership is in line with Tech Mahindra’s ongoing efforts to revolutionize businesses through cutting-edge AI-led products and services. Some of their most recent offerings include Evangelize Pair Programming, Imaging amplifAIer, Operations amplifAIer, Email amplifAIer, Enterprise Knowledge Search, and Generative AI Studio.

Notably, the two businesses have worked together before. Earlier this year, Tech Mahindra announced that it would open a Synergy Lounge in Singapore in partnership with IBM. The lounge aims to help APAC organizations accelerate digital adoption, supporting the effective implementation and use of technologies such as artificial intelligence (AI), intelligent automation, edge computing, 5G, hybrid cloud, and cybersecurity.

Beyond Tech Mahindra, IBM Watsonx has featured in other partnerships aimed at accelerating the application of generative AI. Earlier in the year, the GSMA and IBM announced a new collaboration to develop the GSMA Foundry Generative AI program and GSMA Advance’s AI Training program, intended to boost the use and capabilities of generative AI in the telecom industry.

That program, which is also available digitally, covers the technical underpinnings of generative AI as well as its business strategy, and uses IBM Watsonx to deliver hands-on training for architects and developers seeking in-depth, practical expertise in generative AI.

OpenAI Enhances ChatGPT with Google Drive Integration, Streamlined File Access, and Advanced Analytics

OpenAI has released a major update to ChatGPT that lets users analyze data directly from OneDrive and Google Drive without having to download files and re-upload them. The new feature, available only to paying ChatGPT subscribers, will roll out gradually over the next few weeks, with the goal of streamlining data analysis and saving customers time and trouble.

According to a blog post by OpenAI, “ChatGPT is now more connected to your data than ever before.” “With the integration of Google Drive and OneDrive, you can directly access and analyse your files – from Excel spreadsheets to PowerPoint presentations – within the chatbot.”

According to OpenAI, this direct access, which is available to ChatGPT Plus, Enterprise, and Teams users, lets ChatGPT analyze files “more quickly.” However, the additional data analytics tools are presently accessible only through GPT-4o, the improved version of GPT-4 that powers ChatGPT’s premium tiers.

Beyond simple file access, OpenAI has also enhanced ChatGPT’s ability to understand and manipulate data. Users can now carry out a variety of data-related operations using natural-language commands, such as the following (a sketch of the kind of Python code such a request might translate into appears after the list):

  • Executing analytics-related Python code
  • Combining and streamlining datasets
  • Producing graphs with data from files
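
For illustration, a request such as “merge these two files and chart revenue by month” might be translated by ChatGPT’s data-analysis tooling into Python roughly like the sketch below; the file names and column names are hypothetical, and the actual generated code will differ.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical inputs: an orders file and a customers file uploaded from a drive.
    orders = pd.read_csv("orders.csv")        # columns: order_id, customer_id, amount, month
    customers = pd.read_csv("customers.csv")  # columns: customer_id, region

    # Combine and streamline the two datasets.
    merged = orders.merge(customers, on="customer_id", how="left")
    revenue_by_month = merged.groupby("month", as_index=False)["amount"].sum()

    # Produce a chart from the file data.
    revenue_by_month.plot(kind="bar", x="month", y="amount", legend=False)
    plt.ylabel("Revenue")
    plt.title("Revenue by month")
    plt.tight_layout()
    plt.savefig("revenue_by_month.png")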

ChatGPT’s charting capabilities have also improved significantly. Users can now expand their view of charts, interact with the generated tables and charts, and customize the visualizations by changing colors, asking questions about particular cells, and more. The chatbot can now create interactive bar, line, pie, and scatter plot charts, with static versions for other chart types.

OpenAI also emphasized the security of user data. Data from ChatGPT Teams and Enterprise users will not be used to train AI models, and ChatGPT Plus members have the option to opt out of having their data used for training.

India Leads Asia Pacific in Generative AI Adoption

India’s use of generative AI (GenAI) is documented in a Deloitte report titled Generative AI in Asia Pacific: Young Employees Lead as Employers Play Catch-Up. According to the survey of 11,900 people across 13 Asia Pacific nations, India ranks first in the use and adoption of GenAI: a striking 83% of Indian workers and 93% of students actively use the technology.

India’s strong adoption of GenAI is driven by young, tech-savvy workers dubbed “Generation AI.” These young employees are using GenAI to increase productivity, learn new skills, manage workloads, and save time. The shift presents employers with both new opportunities and new challenges.

The study estimates that daily use of GenAI will rise by 182% within the next five years. That growth reflects a belief that GenAI can increase the Asia Pacific region’s contribution to the global economy: 83% of Indians think it improves social outcomes, and about 75% think it brings economic benefits.

Key findings:

  • Workers and students are driving the GenAI revolution in Asia Pacific, though only 50% believe their employers are aware of their use.
  • GenAI could affect 17% of Asia Pacific’s working hours, or around 1.1 billion hours a year.
  • Developing nations are adopting GenAI about 30% faster than industrialized economies.
  • GenAI users in Asia Pacific save around 6.3 hours a week, while Indian users save 7.85 hours.
  • 41% of GenAI users who save time say their work-life balance has improved.
  • According to their employees, 75% of businesses have not yet adopted GenAI.

Chris Lewin, AI and data capability leader for Deloitte Asia Pacific, stated, “One of the most exciting things about working with GenAI is that it is happening to everything, everywhere, all at once, across the globe.” “Over the past twelve months, we have observed that teams in Italy and Ireland can very immediately relate to the issues that our clients in Indonesia or India are facing.” A crucial insight is that while the rapid integration of AI will not cause immediate job losses, companies that fail to adapt will bear the consequences: their employees, especially fresh talent, will be drawn to competitors offering AI solutions that could fundamentally change the nature of modern work.
