Technology

America Supports Ethical AI Measures for International Militaries

The US government is leading international efforts to establish robust standards that support the responsible military application of AI and autonomous systems. It first unveiled the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” at The Hague on February 16, and the State Department said last week that 47 states have now signed on.

Artificial intelligence (AI) is the ability of machines to perform tasks that would otherwise require human intelligence, such as pattern recognition, learning from experience, inference, prediction, and recommendation generation.

In addition to weapons, military AI capabilities include decision-support systems that help defense leaders at all levels make faster and better-informed decisions, on the battlefield and in the boardroom alike. These systems cover a wide range of areas, from finance, payroll, and accounting to hiring, promoting, and retaining personnel to gathering and combining intelligence, surveillance, and reconnaissance data.

“The United States has been a global leader in responsible military use of AI and autonomy, with the Department of Defense championing ethical AI principles and policies on autonomy in weapon systems for over a decade. The political declaration builds on these efforts. It advances international norms on responsible military use of AI and autonomy, provides a basis for building common understanding, and creates a community for all states to exchange best practices,” said Sasha Baker, under secretary of defense for policy.

The Data, Analytics, and AI Adoption Strategy, released on November 2, is the most recent Defense Department policy on military AI and autonomy, setting the standard for the rest of the world.

The declaration consists of a set of non-binding guidelines outlining best practices for the responsible military use of AI. Under these guidelines, military AI systems must be auditable, have clear and well-defined uses, undergo rigorous testing and evaluation throughout their life cycles, be able to detect and avoid unintended behaviors, and receive senior-level review for high-impact applications.

As the State Department’s press release on November 13 states: “This groundbreaking initiative contains 10 concrete measures to guide the responsible development and use of military applications of AI and autonomy. The declaration and the measures it outlines are an important step in building an international framework of responsibility to allow states to harness the benefits of AI while mitigating the risks. The U.S. is committed to working together with other endorsing states to build on this important development.”

The measures are as follows:

  • States should ensure their military organizations adopt and implement these principles for the responsible development, deployment, and use of AI capabilities.
  • States should take appropriate steps, such as legal reviews, to ensure that their military AI capabilities are used consistently with their respective obligations under international law, in particular international humanitarian law.
  • States should consider how to use military AI capabilities to strengthen their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict.
  • States should ensure that senior officials effectively and appropriately oversee the development and deployment of military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.
  • States should take proactive steps to minimize unintended bias in military AI capabilities.
  • States should ensure that relevant personnel exercise appropriate care in the development, deployment, and use of military AI capabilities, including weapon systems that incorporate such capabilities.
  • States should ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.
  • States should ensure that personnel who use or approve the use of military AI capabilities are adequately trained, so that they can make appropriate, context-informed judgments about their use and reduce the risk of automation bias.
  • States should ensure that military AI capabilities have explicit, well-defined uses and that they are engineered and designed to fulfill those intended functions.
  • States should subject military AI capabilities to appropriate, rigorous testing and assurance for safety, security, and effectiveness across their entire life cycles and within their intended uses. For self-learning or continuously updating capabilities, states should ensure, through processes such as monitoring, that critical safety features have not been degraded.
  • States should implement appropriate safeguards to mitigate the risk of failures in military AI capabilities, including the ability to detect and avoid unintended consequences and the ability to respond, for example by deactivating or disengaging deployed systems, when such systems demonstrate unintended behavior.

Technology

Microsoft Expands Copilot Voice and Think Deeper

Microsoft is taking a major step forward by offering unlimited access to Copilot Voice and Think Deeper, marking two years since the AI-powered Copilot was first integrated into Bing search. This update comes shortly after the tech giant revamped its Copilot Pro subscription and bundled advanced AI features into Microsoft 365.

What’s Changing?

Microsoft remains committed to its $20 per month Copilot Pro plan, ensuring that subscribers continue to enjoy premium benefits. According to the company, Copilot Pro users will receive:

  • Preferred access to the latest AI models during peak hours.
  • Early access to experimental AI features, with more updates expected soon.
  • Extended use of Copilot within popular Microsoft 365 apps like Word, Excel, and PowerPoint.

The Impact on Users

This move signals Microsoft’s dedication to enhancing AI-driven productivity tools. With expanded access to Copilot’s powerful features, users can expect improved efficiency, smarter assistance, and seamless integration across Microsoft’s ecosystem.

As AI technology continues to evolve, Microsoft is positioning itself at the forefront of innovation, ensuring both casual users and professionals can leverage the best AI tools available.

Stay tuned for further updates as Microsoft rolls out more enhancements to its AI offerings.

Technology

Google Launches Free AI Coding Tool for Individual Developers

Google has introduced a free version of Gemini Code Assist, its AI-powered coding assistant, for solo developers worldwide. The tool, previously available only to enterprise users, is now in public preview, making advanced AI-assisted coding accessible to students, freelancers, hobbyists, and startups.

More Features, Fewer Limits

Unlike competing tools such as GitHub Copilot, which limits free users to 2,000 code completions per month, Google is offering up to 180,000 code completions per month, a significantly higher cap designed to accommodate even the most active developers.

“Now anyone can easily learn, generate code snippets, debug, and modify applications without switching between multiple windows,” said Ryan J. Salva, Google’s senior director of product management.

AI-Powered Coding Assistance

Gemini Code Assist for individuals is powered by Google’s Gemini 2.0 AI model and offers:

  • Auto-completion of code while typing
  • Generation of entire code blocks based on prompts
  • Debugging assistance via an interactive chatbot

The tool integrates with popular developer environments like Visual Studio Code, GitHub, and JetBrains, supporting a wide range of programming languages. Developers can use natural language prompts, such as:
“Create an HTML form with fields for name, email, and message, plus a submit button.”
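For illustration only, a prompt like that might yield markup along these lines (a hypothetical sketch, not Gemini’s actual output; the `/submit` endpoint is a placeholder):

```html
<!-- A simple contact form: name, email, and message fields plus a submit button -->
<form action="/submit" method="post">
  <label for="name">Name</label>
  <input type="text" id="name" name="name" required>

  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>

  <label for="message">Message</label>
  <textarea id="message" name="message" rows="5" required></textarea>

  <button type="submit">Submit</button>
</form>
```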

With support for 38 programming languages and a 128,000-token memory for processing complex prompts, Gemini Code Assist provides a robust AI-driven coding experience.

Enterprise Features Still Require a Subscription

While the free tier is generous, advanced features like productivity analytics, Google Cloud integrations, and custom AI tuning remain exclusive to paid Standard and Enterprise plans.

With this move, Google aims to compete more aggressively in the AI coding assistant market, offering developers a powerful and unrestricted alternative to existing tools.

Technology

Elon Musk Unveils Grok-3: A Game-Changing AI Chatbot to Rival ChatGPT

Elon Musk’s artificial intelligence company xAI has unveiled its latest chatbot, Grok-3, which aims to compete with leading AI models such as OpenAI’s ChatGPT and China’s DeepSeek. Grok-3 is now available to Premium+ subscribers on Musk’s social media platform X (formerly Twitter), as well as through xAI’s mobile app and the new SuperGrok subscription tier on Grok.com.

Advanced capabilities and performance

Grok-3 was trained with ten times the computing power of its predecessor, Grok-2. Initial tests show that Grok-3 outperforms models from OpenAI, Google, and DeepSeek, particularly in areas such as math, science, and coding. The chatbot offers advanced reasoning capabilities that can decompose complex questions into manageable tasks. Users can interact with Grok-3 in two modes: “Think,” which performs step-by-step reasoning, and “Big Brain,” which is designed for more difficult tasks.

Strategic Investments and Infrastructure

To support the development of Grok-3, xAI has made major investments in its supercomputer cluster, Colossus, which is currently the largest globally. This infrastructure underscores the company’s commitment to advancing AI technology and maintaining a competitive edge in the industry.

New Offerings and Future Plans

Along with Grok-3, xAI has also introduced DeepSearch, a reasoning-based tool designed to enhance research, brainstorming, and data analysis tasks. This tool aims to provide users with more insightful and relevant information. Looking to the future, xAI plans to release Grok-2 as an open-source model, encouraging community participation and further development. Additionally, upcoming improvements for Grok-3 include a synthesized voice feature, which aims to improve user interaction and accessibility.

Market position and competition

The launch of Grok-3 positions xAI as a major competitor in the AI chatbot market, directly challenging established models from OpenAI and emerging competitors such as DeepSeek. While Grok-3’s performance claims have yet to be independently verified, early indications suggest it could have a significant impact on the AI landscape. xAI is actively seeking $10 billion in investment from major companies, reflecting its strong belief in its technological advancements and market potential.
