EAU CLAIRE — The Eau Claire school board discussed a plan Monday to potentially share a referendum with the city of Eau Claire on the Nov. 8 ballot.
The meeting was a discussion on the idea, and no plans were approved.
In June, the school board heard results of a survey of area voters’ opinions on going to referendum, conducted by accounting firm Baker Tilly. The results showed that the city and the school district going to referendum at the same time could hurt both measures’ chances, though the district’s referendum drew more support from those surveyed than a potential city referendum.
The district would like to see the potential $7.5 million annual increase go toward building upgrades across the district. However, the survey results show the public would prefer it to support student academics and mental health.
While the school board has yet to take any action on the referendum, last week the City Council voted in favor of moving ahead with a fall referendum. From the council’s discussion at a June work session, a referendum would be used to fund new police officers, firefighter/EMS workers and community services employees.
Now the school board would like to connect with the city to discuss where each party stands, given that it looks like both will be on the ballot.
“I think we need to be aware of the city’s work and be in contact with council members,” President Tim Nordin said.
Commissioner Erica Zerr said she felt that, after the June meeting with Baker Tilly, the city didn’t really understand what dollar amount the district needs going forward.
The board is now discussing opening more lines of communication with the city heading toward November, although no formal plans were made at the meeting.
“It’s going to be an interesting fall now with both of us on the ballot,” Zerr said.
With the City Council giving its referendum the green light, the school board has some decisions to make. The board is expected to vote in August on whether there will be a question on the ballot and what that question will be.
Communication plan update
Also at Monday’s meeting, the board heard a communication plan update from Zerr.
The board considered adding a monthly standing agenda item for reports specific to the Community Engagement Plan.
Additionally, the board discussed working group recommendations on the plan’s components, including implementing school board ambassadors for each school in the district; electing a board member to meet with governmental entities on demand; replacing the social media component of the plan with a rotating newspaper column; and creating a key communicator group to help the board better connect with community youth organizations, groups and individuals.
The board had email contact with the Sun Prairie School District, which has implemented many of these recommendations as part of its community outreach plan.
The general goal of the plan is to create a framework to help the board better connect with the broader community. Zerr said this plan is about figuring out how to get the Eau Claire community to engage within the school district and with the board.
“This is for the board to communicate with the public,” Nordin said.
The biggest topic of conversation among the board was the proposal of a key communicator group. The group would be composed of 25-35 individuals representing diverse groups of people from the community.
The group would meet quarterly to discuss agenda items and engage with one another about community and district issues. A few board members would attend the meetings, but not all, according to the current recommendation.
Zerr supports adopting this component.
“It’s the one thing in the communication plan that I felt would be a really good fit for us as a district,” she said.
The board discussed Sun Prairie’s plan in practice and asked Zerr to follow up with its officials on details, but was hesitant to move ahead without more evidence that the plan works. The board will work on developing lists of which groups could be part of a potential key communicator group.
The board took no action on the plan at the meeting.
In other district news:
• The board approved the annual curriculum standards, adopting the Wisconsin Model Academic Standards for all content areas.
• The board approved the 2022-23 district-wide meal prices. Student meal prices will not change from the previous school year.
• The board approved the district’s property insurance renewal. The total cost for the year is $1,015,084.
The tech industry’s latest artificial intelligence constructs can be pretty convincing if you ask them what it feels like to be a sentient computer, or maybe just a dinosaur or squirrel. But they’re not so good — and sometimes dangerously bad — at handling other seemingly straightforward tasks.
Take, for instance, GPT-3, a Microsoft-controlled system that can generate paragraphs of human-like text based on what it’s learned from a vast database of digital books and online writings. It’s considered one of the most advanced of a new generation of AI algorithms that can converse, generate readable text on demand and even produce novel images and video.
Among other things, GPT-3 can write up most any text you ask for — a cover letter for a zookeeping job, say, or a Shakespearean-style sonnet set on Mars. But when Pomona College professor Gary Smith asked it a simple but nonsensical question about walking upstairs, GPT-3 muffed it.
“Yes, it is safe to walk upstairs on your hands if you wash them first,” the AI replied.
These powerful and power-chugging AI systems, technically known as “large language models” because they’ve been trained on a huge body of text and other media, are already getting baked into customer service chatbots, Google searches and “auto-complete” email features that finish your sentences for you. But most of the tech companies that built them have been secretive about their inner workings, making it hard for outsiders to understand the flaws that can make them a source of misinformation, racism and other harms.
“They’re very good at writing text with the proficiency of human beings,” said Teven Le Scao, a research engineer at the AI startup Hugging Face. “Something they’re not very good at is being factual. It looks very coherent. It’s almost true. But it’s often wrong.”
That’s one reason a coalition of AI researchers co-led by Le Scao — with help from the French government — launched a new large language model Tuesday that’s supposed to serve as an antidote to closed systems such as GPT-3. The group is called BigScience and its model is BLOOM, for the BigScience Large Open-science Open-access Multilingual Language Model. Its main breakthrough is that it works across 46 languages, including Arabic, Spanish and French — unlike most systems, which are focused on English or Chinese.
It’s not just Le Scao’s group aiming to open up the black box of AI language models. Big Tech company Meta, the parent of Facebook and Instagram, is also calling for a more open approach as it tries to catch up to the systems built by Google and OpenAI, the company that runs GPT-3.
“We’ve seen announcement after announcement after announcement of people doing this kind of work, but with very little transparency, very little ability for people to really look under the hood and peek into how these models work,” said Joelle Pineau, managing director of Meta AI.
Competitive pressure to build the most eloquent or informative system — and profit from its applications — is one of the reasons that most tech companies keep a tight lid on them and don’t collaborate on community norms, said Percy Liang, an associate computer science professor at Stanford who directs its Center for Research on Foundation Models.
“For some companies this is their secret sauce,” Liang said. But they are often also worried that losing control could lead to irresponsible uses. As AI systems are increasingly able to write health advice websites, high school term papers or political screeds, misinformation can proliferate and it will get harder to know what’s coming from a human or a computer.
Meta recently launched a new language model called OPT-175B that uses publicly available data — from heated commentary on Reddit forums to the archive of U.S. patent records and a trove of emails from the Enron corporate scandal. Meta says its openness about the data, code and research logbooks makes it easier for outside researchers to help identify and mitigate the bias and toxicity that it picks up by ingesting how real people write and communicate.
“It is hard to do this. We are opening ourselves for huge criticism. We know the model will say things we won’t be proud of,” Pineau said.
While most companies have set their own internal AI safeguards, Liang said what’s needed are broader community standards to guide research and decisions such as when to release a new model into the wild.
It doesn’t help that these models require so much computing power that only giant corporations and governments can afford them. BigScience, for instance, was able to train its models because it was offered access to France’s powerful Jean Zay supercomputer near Paris.
The trend for ever-bigger, ever-smarter AI language models that could be “pre-trained” on a wide body of writings took a big leap in 2018 when Google introduced a system known as BERT that uses a so-called “transformer” technique that compares words across a sentence to predict meaning and context. But what really impressed the AI world was GPT-3, released by San Francisco-based startup OpenAI in 2020 and soon after exclusively licensed by Microsoft.
GPT-3 led to a boom in creative experimentation as AI researchers with paid access used it as a sandbox to gauge its performance — though without important information about the data it was trained on.
OpenAI has broadly described its training sources in a research paper, and has also publicly reported its efforts to grapple with potential abuses of the technology. But BigScience co-leader Thomas Wolf said it doesn’t provide details about how it filters that data, or give access to the processed version to outside researchers.
“So we can’t actually examine the data that went into the GPT-3 training,” said Wolf, who is also a chief science officer at Hugging Face. “The core of this recent wave of AI tech is much more in the dataset than the models. The most important ingredient is data and OpenAI is very, very secretive about the data they use.”
Wolf said that opening up the datasets used for language models helps humans better understand their biases. A multilingual model trained in Arabic is far less likely to spit out offensive remarks or misunderstandings about Islam than one that’s only trained on English-language text in the U.S., he said.
One of the newest AI experimental models on the scene is Google’s LaMDA, which also incorporates speech and is so impressive at responding to conversational questions that one Google engineer argued it was approaching consciousness — a claim that got him suspended from his job last month.
Colorado-based researcher Janelle Shane, author of the AI Weirdness blog, has spent the past few years creatively testing these models, especially GPT-3 — often to humorous effect. But to point out the absurdity of thinking these systems are self-aware, she recently instructed it to be an advanced AI but one which is secretly a Tyrannosaurus rex or a squirrel.
“It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great,” GPT-3 said, after Shane asked it for a transcript of an interview and posed some questions.
Shane has learned more about its strengths, such as its ease at summarizing what’s been said around the internet about a topic, and its weaknesses, including its lack of reasoning skills, the difficulty of sticking with an idea across multiple sentences and a propensity for being offensive.
“I wouldn’t want a text model dispensing medical advice or acting as a companion,” she said. “It’s good at that surface appearance of meaning if you are not reading closely. It’s like listening to a lecture as you’re falling asleep.”
EAU CLAIRE — Residents are invited to provide their input on county services and priorities through a County Citizen Engagement Survey.
The purpose of the survey is to ask residents how essential different county services are. Results will offer the County Board of Supervisors insight into the allocation process for the 2023 budget.
The survey begins with basic information about the respondent, including age, gender, length of residency and municipality of residency.
Residents are then asked to indicate their familiarity with and the priority they place on the following services: judicial services; law enforcement services; health and social services; planning, conservation and land use services; land, road and air services; general government services; and education and community partnerships.
The survey will be available until Sept. 30 and can be accessed on the county website, under the 2023 County Budget Information section.
At its meeting Monday, the Committee on Finance and Budget determined that public input sessions will be scheduled in the near future.
Finance Department Budget
The committee was also presented with the 2023 Finance Department budget by Finance Director Norb Kirk.
Kirk said one of the main challenges ahead in the budget process is the department’s reliance on manual processing, which it is trying to move away from.
“We still move a lot of paper in the department, we get a lot of invoices that are paper, we have a lot that we end up shuffling around,” Kirk said.
The Finance Department is requesting $988,074, a 9% increase from the previous budget request. All budget requests will be brought to the County Board of Supervisors for approval.