Technology

President's Cabinet: Updates on the FAFSA, the Campus Master Plan and Artificial Intelligence

Metropolitan State University of Denver has plenty to celebrate now that the spring semester is well underway. President Janine Davidson, Ph.D., opened the Feb. 29 President's Cabinet meeting with a few highlights:

Undergraduate enrollment is up more than 4% over the previous spring.

Fall-to-spring retention is up 4.5% over the previous year.

For the first time, the MSU Denver Foundation's endowment paid out more than $1 million in grants last year.

With more alumni engaged than at any point in the University's history, University Advancement has raised $57.4 million toward its capital campaign.

The state legislature's Joint Budget Committee has recommended funding the Classroom to Career Hub, Health Institute Tower and Student Information System capital projects.

The University also continues working to foster a culture of safety and communication, supported by resources such as student dispute-resolution services, the ACPD's First Amendment Assemblies policy and the University's freedom-of-expression policy.

AHEC Master Plan paused to gather more feedback

To gather more input from the community, the Auraria Board of Directors has decided to postpone the vote on the Auraria Campus Master Plan that was scheduled for April. A second town hall is scheduled for today at noon, following one held Thursday.

Introducing students to artificial intelligence and the efficiency gains it can bring

Sam Jay, Ph.D., director of Faculty Affairs, presented on the University's upcoming workshops and its exploration of artificial-intelligence tools.

According to Jay, artificial intelligence (AI) has the potential to be used in a variety of contexts, including research enhancement, virtual learning environments, automated administrative tasks, curriculum development, and more personalized learning.

Together with colleagues, Jay is creating guidelines to help employees safeguard institutional data and drafting suggested syllabus language on AI use and learning objectives. He is also leading a workshop series to help all Roadrunners learn more about the tools and apply them safely. The first workshop is scheduled for March 29 from 1 to 2:45 p.m.

Action Plans for Employee Engagement are in Progress

According to the Energize Employee Engagement Survey results, overall workplace satisfaction is 59%, down 4 percentage points from the January 2022 survey.

Over 80% of participants said their work is meaningful and that their supervisor is attentive to their concerns.

Employees also appreciate the University's strong values, which give them a sense of belonging, and the flexibility that allows for work-life balance.

Areas for improvement include concerns about pay, inefficient processes and a disconnect with senior leadership.

Deans and vice presidents, working with Human Resources, have already shared action plans addressing the employee concerns specific to their results. They will continue providing updates throughout the spring semester, and the Early Bird will recap significant areas of progress in the coming year.

UK Safety Institute Unveils ‘Inspect’: A Comprehensive AI Safety Tool

The U.K. AI Safety Institute, the country's AI safety authority, unveiled a set of resources intended to "strengthen AI safety." The new safety tool is expected to simplify the development of AI evaluations for industry, academia and research institutions.

The new "Inspect" platform will reportedly be released under an open-source MIT License. Inspect evaluates specific AI model capabilities: along with examining models' fundamental knowledge and reasoning skills, it produces a score based on the findings.

What is the AI safety tool?

Inspect is made up of datasets, solvers and scorers. Datasets supply the samples used in evaluations; solvers administer the tests; and scorers assess solvers' output and aggregate test scores into metrics. Inspect's built-in features can also be extended with third-party Python packages.
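The division of labor among those three components can be sketched in plain Python. This is a conceptual illustration of the dataset/solver/scorer pattern only, not the actual Inspect API; the class and function names, and the toy stand-in model, are invented for the example.

```python
# Conceptual sketch of Inspect's three components: datasets supply
# evaluation samples, solvers administer the test, and scorers turn
# solver output into metrics. Names here are illustrative, not the
# real Inspect API.
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str   # input shown to the model under evaluation
    target: str   # expected answer

def toy_model(prompt: str) -> str:
    """Stand-in for a real AI model being evaluated."""
    return "4" if "2 + 2" in prompt else "unknown"

def solver(dataset: list[Sample], model) -> list[tuple[Sample, str]]:
    """Run each sample through the model and collect its answers."""
    return [(sample, model(sample.prompt)) for sample in dataset]

def scorer(results: list[tuple[Sample, str]]) -> float:
    """Mark each answer against its target and combine into one metric."""
    correct = sum(1 for sample, answer in results if answer == sample.target)
    return correct / len(results)

dataset = [Sample("What is 2 + 2?", "4"),
           Sample("What is the capital of France?", "Paris")]
score = scorer(solver(dataset, toy_model))
print(score)  # 0.5: the toy model answers one of the two samples correctly
```

In the real platform, the model would be an external AI system and the scorer could aggregate many such pass/fail judgments into richer metrics; the pipeline shape, however, is the same.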

As the UK AI Safety Institute's evaluations platform becomes accessible to the worldwide AI community today (Friday, May 10), experts suggest that global AI safety evaluations can be improved, opening the door for safe innovation of AI models.

A Deep Dive

Inspect marks "the first time that an AI safety testing platform which has been spearheaded by a state-backed body has been released for wider use," according to a press release the Safety Institute posted on Friday.

The release, shaped by some of the top AI experts in the UK, is said to arrive at a pivotal juncture for the advancement of AI. Experts in the field predict that more powerful models will become available in 2024, underscoring the need for ethical and safe AI research.

Industry Reacts

“As Chair of the AI Safety Institute, I am delighted to say that we are open sourcing our Inspect platform. We believe Inspect may be a foundational tool for AI Safety Institutes, research organizations and academia. Effective cooperation on AI safety testing necessitates a common, easily available evaluation methodology,” said Ian Hogarth, chair of the AI Safety Institute.

“As part of the ongoing drumbeat of UK leadership on AI safety, I have approved the open sourcing of the AI Safety Institute’s testing tool, dubbed Inspect,” said Michelle Donelan, the secretary of state for science, innovation and technology. “This puts UK ingenuity at the heart of the global effort to make AI safe and cements our position as the world leader in this space.”

IBM Makes Granite AI Models Available To The Public

IBM Research recently announced that it is open sourcing its Granite code foundation models, aiming to democratize access to advanced AI tools and potentially transform how code is written, maintained and evolved across industries.

What Are IBM's Granite Code Models?

Granite grew out of IBM's plan to make coding easier. Recognizing the complexity and rapid innovation inherent in software development, IBM drew on its extensive research resources to produce a suite of AI-driven tools that help developers navigate a complicated coding environment.

The result is a family of Granite code models ranging from 3 billion to 34 billion parameters, optimized for code generation, bug fixing and code explanation and meant to improve workflow productivity in software development.

The Granite models automate routine and complex coding tasks alike, increasing efficiency. Developers can concentrate on more strategic and creative parts of software design while expediting the development process, which for businesses translates to better software quality and faster time to market.
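In practice, tools built on code models like these typically wrap the model behind task-specific prompts for generation, bug fixing and explanation. The sketch below illustrates that wrapper pattern; the templates and the `build_prompt` helper are hypothetical examples, not IBM's actual tooling or prompts.

```python
# Illustrative prompt-template pattern for a code-model wrapper.
# The task names and templates below are hypothetical, invented for
# this example; a real tool would send the rendered prompt to a model.
TEMPLATES = {
    "generate": "Write a {language} function that {description}.",
    "fix": "Fix the bug in the following {language} code:\n{code}",
    "explain": "Explain what the following {language} code does:\n{code}",
}

def build_prompt(task: str, **fields: str) -> str:
    """Render the prompt template for one of the supported tasks."""
    if task not in TEMPLATES:
        raise ValueError(f"unknown task: {task}")
    return TEMPLATES[task].format(**fields)

prompt = build_prompt("generate", language="Python",
                      description="returns the n-th Fibonacci number")
print(prompt)  # Write a Python function that returns the n-th Fibonacci number.
```

The same rendered prompt could then be passed to any of the published Granite checkpoints, for example through the platforms mentioned below, with the wrapper choosing the template that matches the developer's task.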

There is also ample room for invention. Because the community can modify and build on the Granite models, new tools and applications are expected to emerge, some of which may redefine software-development norms and practices.

The models are trained on the extensive CodeNet dataset, which comprises some 500 million lines of code in more than 50 programming languages along with code snippets, problems and descriptions. This substantial training helps the models better understand and generate code.

Analyst’s Take

IBM expands its potential user base and fosters collaborative creation and customization of these models by making these formidable tools accessible on well-known platforms like GitHub, Hugging Face, watsonx.ai, and Red Hat’s RHEL AI.

This action has significant ramifications. First, it greatly lowers the barrier to entry for software developers who want to use cutting-edge AI techniques. Independent developers and startups now have access to the same powerful resources as established businesses, leveling the playing field and encouraging a more dynamic, creative development community.

IBM’s strategy not only makes sophisticated coding tools more widely available, but it also creates a welcoming atmosphere for developers with different skill levels and resource capacities.

In terms of competition, IBM is positioned as a pioneer in the AI-powered coding arena, taking direct aim at other IT behemoths that are venturing into related fields but might not have made a commitment to open-source models just yet. IBM’s presence in developers’ daily tools is ensured by making the Granite models available on well-known platforms like GitHub and Hugging Face, which raises IBM’s profile and influence among the software development community.

With the Granite models now available for public use, IBM may have a significant impact on developer productivity and enterprise efficiency, establishing a new standard for AI integration in software development tools.

A State-Backed AI Safety Tool Is Unveiled in the UK

The United Kingdom has unveiled what it calls a groundbreaking toolbox for artificial intelligence (AI) safety testing.

The novel product, named “Inspect,” was unveiled on Friday, May 10, by the nation’s AI Safety Institute. It is a software library that enables testers, including international governments, startups, academics, and AI developers, to evaluate particular AI models’ capabilities and then assign a score based on their findings.

As per the news release from the institute, Inspect is the first AI safety testing platform that is supervised by a government-backed organization and made available for public usage.

As part of the ongoing efforts by the United Kingdom to lead the field in AI safety, Michelle Donelan, the secretary of state for science, innovation, and technology, announced that the AI Safety Institute’s testing platform, named Inspect, is now open sourced.

This solidifies the United Kingdom’s leadership position in this field and places British inventiveness at the center of the worldwide push to make AI safe.

Less than a month has passed since the US and UK governments agreed to cooperate on testing the most cutting-edge AI models as part of a joint effort to build safe AI.

“AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety which can keep pace with the technology’s emerging risks,” the U.S. Department of Commerce said at the time.

The two governments also decided to “tap into a collective pool of expertise by exploring personnel exchanges” between their organizations and to establish alliances with other countries to promote AI safety globally. They also intended to conduct at least one joint test on a publicly accessible model.

The partnership follows commitments made at the AI Safety Summit in November of last year, where world leaders explored the need for global cooperation in combating the potential risks associated with AI technology.

“This new partnership will mean a lot more responsibility being put on companies to ensure their products are safe, trustworthy and ethical,” AI ethics evangelist Andrew Pery of global intelligent automation company ABBYY told PYMNTS soon after the collaboration was announced.

To gain a competitive edge, creators of disruptive technologies often release their products with a "ship first, fix later" mindset. OpenAI, for instance, released ChatGPT for widespread commercial use despite its potential harms, while being reasonably open about its possible risks.
