<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>2026 | Math to Power Industry</title><link>https://m2pi.ca/tag/2026/</link><atom:link href="https://m2pi.ca/tag/2026/index.xml" rel="self" type="application/rss+xml"/><description>2026</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2025 Pacific Institute for the Mathematical Sciences</copyright><lastBuildDate>Tue, 24 Mar 2026 00:00:00 +0000</lastBuildDate><item><title>Awesense</title><link>https://m2pi.ca/project/2026/awesense/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/awesense/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/project/2026/awesense/AwesenseLogo_hucc641fe9de6a93e770723ac6578f61ba_22084_5a2990e9371c2b73c58fd2fe15f3745b.webp 400w,
/project/2026/awesense/AwesenseLogo_hucc641fe9de6a93e770723ac6578f61ba_22084_0b4baea3fdc3f5e4eaf7903b0528373e.webp 760w,
/project/2026/awesense/AwesenseLogo_hucc641fe9de6a93e770723ac6578f61ba_22084_1200x1200_fit_q90_h2_lanczos_3.webp 1200w"
src="https://m2pi.ca/project/2026/awesense/AwesenseLogo_hucc641fe9de6a93e770723ac6578f61ba_22084_5a2990e9371c2b73c58fd2fe15f3745b.webp"
width="555"
height="514"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
At &lt;a href="https://www.awesense.com/" target="_blank" rel="noopener">Awesense&lt;/a>, we&amp;rsquo;ve been building a platform for
power grid digital twins with the goal of allowing easy access to and use of
electrical grid data in order to build a myriad of applications and use cases
for the decarbonized grid of the future, which will need to include more and
more distributed energy resources (DERs) such as rooftop solar, batteries as
well as electric vehicles (EVs) and still operate safely and efficiently.&lt;/p>
&lt;p>Awesense has built a sandbox environment populated with synthetic but realistic
data and exposing APIs on top of which such applications can be built. As such,
what we are looking for is to create a collection of prototype applications
demonstrating the power of the platform.&lt;/p>
&lt;p>&lt;em>The current challenge involves building computational techniques for
automatically detecting the presence of behind-the-meter electric vehicles and
disaggregating their consumption from the overall household (meter)
consumption.&lt;/em>&lt;/p>
&lt;h3 id="background">Background&lt;/h3>
&lt;p>Energy disaggregation, also known as appliance disaggregation is a technique
which is used to analyze and break down the energy consumption in a building or
household into individual appliance-level energy usages. The goal is to identify
and monitor the energy consumption of specific “appliances” without the need for
additional metering or sensors on each device.&lt;/p>
&lt;p>The process of energy disaggregation involves analyzing the overall power signal
from a building or household and applying advanced algorithms and machine
learning techniques to separate and attribute energy consumption to specific
sources. One of these sources can be electric vehicles (EVs), particularly ones
plugged-in directly into regular outlets. These are the focus of the proposed
project.&lt;/p>
&lt;p>The ability to perform energy disaggregation analytics holds significant
importance for utilities and electricity distribution. By gaining granular
insights into customers&amp;rsquo; energy usage at a more granular level, utilities can
develop targeted demand response programs, optimize load distribution, and
enhance grid management. Energy disaggregation analytics enables utilities to
identify peak demand periods, forecast load patterns, and make informed
decisions regarding infrastructure investments.&lt;/p>
&lt;h3 id="details">Details&lt;/h3>
&lt;p>Electrical distribution grids are composed of grid elements of various types
(e.g. power lines, transformers, switches, meters, SCADA devices, etc.)
connected to each other in a network (graph) structure. A feeder is a set of
distribution lines (often operating at medium voltage) that collectively
transport power from a substation to a multitude of downstream loads. Certain
grid elements like meters, SCADA devices, fixed or movable IoT sensors, and
Distributed Energy Resources (DERs) produce time series data such as voltage,
current, power, energy, battery state of charge, and other measurements.&lt;/p>
&lt;p>In this project, the students will need to use the Awesense SQL or REST APIs to
retrieve the necessary time series and grid structure information to determine
(and visualize) which households (meters) likely have an electric vehicle, at
what times is it plugged in and how much energy does it draw.&lt;/p>
&lt;p>Additional information about the EV disaggregation use case can be found
&lt;a href="https://www.awesense.com/ecosystem/ev-appliance-disaggregation/" target="_blank" rel="noopener">here&lt;/a>.&lt;/p>
&lt;h3 id="skillset">Skillset&lt;/h3>
&lt;p>This work involves coding some analyses and visualizations on top of the data
and APIs described above and devising an algorithm for the redistribution of
load to optimize overall capacity. It would require good data wrangling,
statistics and data visualization skills to design and then implement the best
way to transform, aggregate and visualize the data, and good
mathematical/algorithmic skills for the optimization piece. The data access APIs
are in SQL form, so SQL querying skills would also be desirable. Alternatively,
REST APIs can be made available. Beyond that, the tools and programming
languages used to create the analyses, visualizations and algorithms would be up
to the students. Typical ones we have used include BI tools like Power BI or
Tableau and notebooking applications like Jupyter or Zeppelin combined with
programming languages like Python or R.&lt;/p>
&lt;h3 id="tool-access-and-support">Tool Access and Support&lt;/h3>
&lt;p>If the participants don’t have any electrical background, Awesense will teach
enough of it to allow handling the given use case.&lt;/p>
&lt;p>In addition to the previously mentioned SQL and REST APIs, the Awesense platform
also comes with a web-based application (graphical user interface front-end)
called TGI (True Grid Intelligence) that serves as a companion visual explorer
for the data stored in the platform. The snapshot below shows a portion of the
grid available in the synthetic dataset. An EV Charger is selected (map blue
marker and highlighted row in the table) and its properties are shown in the
left sidebar, along with an electrical flow time series chart. The SQL &amp;amp;
REST APIs include functionality for retrieving all this information
programmatically.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img src="./table.png" alt="" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>For the duration of the project, upon agreeing to a standard end-user licensing
agreement, participants in this PIMS project will be given access to the sandbox
environment, including TGI, the programmatic SQL and REST APIs and associated
documentation, as well as access to a GitHub repository with sample SQL, REST
and python code snippets in Jupyter notebooks, showcasing how to use the APIs.&lt;/p>
&lt;p>A successful project will consist of an algorithm and a set of visuals answering
the questions posed above for the sandbox dataset, accompanied by any BI tool
files or notebook code used to produce them; Awesense permits and encourages the
public sharing of these artifacts, as long as credit for the dataset and APIs is
given to Awesense (e.g. by including a “Powered by Awesense” phrase and an
&lt;a href="https://www.awesense.com" target="_blank" rel="noopener">Awesense website link&lt;/a>; publishing the raw data
retrieved from the sandbox is not permitted.&lt;/p>
&lt;p>&lt;em>Important note : project participants will be given individual access
credentials, and they should not share with anyone else (including not among
themselves) nor cache/save them in publicly posted files.&lt;/em>&lt;/p></description></item><item><title>Environment and Climate Change Canada</title><link>https://m2pi.ca/project/2026/eccc/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/eccc/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/project/2026/eccc/ECCC_hu400e17715dc2e716e2c3544e5475f9ab_37724_05500c4f253d806f8216acf2ef8fbd7c.webp 400w,
/project/2026/eccc/ECCC_hu400e17715dc2e716e2c3544e5475f9ab_37724_9fb5a777ceb839649358d10628a971ff.webp 760w,
/project/2026/eccc/ECCC_hu400e17715dc2e716e2c3544e5475f9ab_37724_1200x1200_fit_q90_h2_lanczos_3.webp 1200w"
src="https://m2pi.ca/project/2026/eccc/ECCC_hu400e17715dc2e716e2c3544e5475f9ab_37724_05500c4f253d806f8216acf2ef8fbd7c.webp"
width="760"
height="314"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>The training of a neural network is the most cost-consuming part of scientific
machine learning. The optimizers are algorithms used to adjust model parameters
to minimize the loss function. The optimizers determine the efficiency of the
training approach and the quality of the resulting model. No one single
optimizer is fit for all problems.&lt;/p>
&lt;p>The &lt;a href="https://arxiv.org/html/2601.21151v1" target="_blank" rel="noopener">PARADIS&lt;/a> model is a data-driven model developed by &lt;a href="https://www.canada.ca/en/environment-climate-change.html" target="_blank" rel="noopener">Environment and Climate
Change Canada (ECCC)&lt;/a>
for medium-range global weather forecasting. The goal of this project is to
compare various optimization protocols during the training of the PARADIS
model.&lt;/p>
&lt;h3 id="data-set">Data set&lt;/h3>
&lt;h4 id="shallow-water-equations">Shallow Water Equations&lt;/h4>
&lt;p>(&lt;a href="https://en.wikipedia.org/wiki/Shallow_water_equations" target="_blank" rel="noopener">SWE&lt;/a>) Due to the size
of the ERA-5 global weather dataset, it is more suitable to use the data
generated by SWE for this project.&lt;/p>
&lt;h3 id="optimizers-for-comparison">Optimizers for comparison&lt;/h3>
&lt;ul>
&lt;li>&lt;a href="https://optimization.cbe.cornell.edu/index.php?title=AdamW" target="_blank" rel="noopener">AdamW&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" target="_blank" rel="noopener">SGD&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" target="_blank" rel="noopener">Muon&lt;/a>&lt;/li>
&lt;li>etc.&lt;/li>
&lt;/ul>
&lt;h3 id="metric-of-comparison">Metric of comparison&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>Efficiency:&lt;/strong> the rate of convergence&lt;/li>
&lt;li>&lt;strong>Quality:&lt;/strong> The loss of the optimized point, the spectral of the optimized point.&lt;/li>
&lt;/ul>
&lt;p>&lt;em>N.B. Students are required to have access to GPU for training.&lt;/em>&lt;/p></description></item><item><title>Hummingbird Bioscience</title><link>https://m2pi.ca/project/2026/hummingbird/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/hummingbird/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/project/2026/hummingbird/HummingBird_hu63188a33346aef26bb8c3c5517f9c1ba_21428_3a3ca834165371f007f8600db3103ad2.webp 400w,
/project/2026/hummingbird/HummingBird_hu63188a33346aef26bb8c3c5517f9c1ba_21428_513567795b78023d07d194ae6f526436.webp 760w,
/project/2026/hummingbird/HummingBird_hu63188a33346aef26bb8c3c5517f9c1ba_21428_1200x1200_fit_q90_h2_lanczos_3.webp 1200w"
src="https://m2pi.ca/project/2026/hummingbird/HummingBird_hu63188a33346aef26bb8c3c5517f9c1ba_21428_3a3ca834165371f007f8600db3103ad2.webp"
width="760"
height="305"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Recently, the quest for mathematical superintelligence has become a focal point in artificial intelligence. For example, Robinhood’s spinoff Harmonic has achieved a valuation exceeding $1 billion with its Aristotle tool. A key reason for this excitement is that mathematical reasoning is fundamentally different from other forms of reasoning: it is, by nature, airtight. It was believed that AI systems would struggle with this domain. However, recent advances suggest otherwise.&lt;/p>
&lt;p>Modern systems are increasingly capable of translating between informal mathematical language (as written in papers) and formal representations suitable for proof assistants such as Lean, Rocq, or Agda. One striking example includes the recent overturning of a long-standing result in an extended quantum field theory, previously cited hundreds of times over more than a decade. However, these systems are demonstrating strong performance primarily on carefully selected, benchmark-style problems or carefully chosen problems. Their behavior outside of these settings remains poorly understood.&lt;/p>
&lt;p>In particular, while they can often verify closed-form results in isolation, they often struggle to correctly represent and validate the dependencies those results rely on. This creates a critical reliability gap: outputs may appear correct locally while being globally inconsistent.&lt;/p>
&lt;p>As an industry partner developing AI systems for mathematical reasoning, we are directly interested in understanding the limits of these auto-formalization tools. Without a systematic understanding of their failure modes, deploying such systems introduces substantial risk. This is made worse as organizations begin making significant financial and strategic decisions based on their outputs.&lt;/p>
&lt;h3 id="project-objective">Project Objective&lt;/h3>
&lt;p>This project will investigate the robustness of auto-formalization systems by
identifying and characterizing their failure modes. Teams will:&lt;/p>
&lt;ul>
&lt;li>Explore how current systems translate informal mathematics into formal
representations&lt;/li>
&lt;li>Identify classes of problems where these systems perform well and where they
fail&lt;/li>
&lt;li>Develop strategies—such as adversarial search or evolutionary (genetic)
methods—to generate mathematical inputs that induce failure&lt;/li>
&lt;li>Analyze and categorize failure modes, with particular attention to dependency
structure and logical consistency&lt;/li>
&lt;/ul>
&lt;h3 id="deliverables">Deliverables&lt;/h3>
&lt;ul>
&lt;li>Challenging mathematical statements on which tools struggle or fail&lt;/li>
&lt;li>A taxonomy of observed failure modes&lt;/li>
&lt;li>Quantitative or qualitative metrics for evaluating system robustness&lt;/li>
&lt;li>Bonus: Recommendations for improving reliability in auto-formalization systems&lt;/li>
&lt;/ul>
&lt;h3 id="why-this-matters">Why This Matters&lt;/h3>
&lt;p>These systems can already produce convincing formal outputs. However, without
understanding when and how they fail—particularly in handling dependencies—their
use in research, verification, and high-stakes applications remains
fundamentally limited. This project aims to make this gap more visible.&lt;/p>
&lt;h3 id="teams-may-consider-approaches-such-as">Teams may consider approaches such as:&lt;/h3>
&lt;ul>
&lt;li>Restricting to a specific domain (e.g., algebraic identities, inequalities,
combinatorics, measure theory, symplectic geometry, etc.)&lt;/li>
&lt;li>Designing perturbations of known theorems to test robustness&lt;/li>
&lt;li>Modeling the search space of candidate statements&lt;/li>
&lt;li>Using adversarial or evolutionary methods to discover failure cases&lt;/li>
&lt;/ul>
&lt;p>Further, we will assist teams in getting bootstrapped to experimentation with
both closed and open-source LLMs and auto-formalization tools, and help set up
tooling for advanced search methods such as adversarial or genetic approaches.&lt;/p></description></item><item><title>One Child Every Child</title><link>https://m2pi.ca/project/2026/onechildeverychild/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/onechildeverychild/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/project/2026/onechildeverychild/OCEC_huce7ae0c3cde9b86bd1a12c5bc6d654e6_172919_bc7504d4a7466406be33bd5f42561708.webp 400w,
/project/2026/onechildeverychild/OCEC_huce7ae0c3cde9b86bd1a12c5bc6d654e6_172919_bce72567446a0e517caf2ba80a609063.webp 760w,
/project/2026/onechildeverychild/OCEC_huce7ae0c3cde9b86bd1a12c5bc6d654e6_172919_1200x1200_fit_q90_h2_lanczos_3.webp 1200w"
src="https://m2pi.ca/project/2026/onechildeverychild/OCEC_huce7ae0c3cde9b86bd1a12c5bc6d654e6_172919_bc7504d4a7466406be33bd5f42561708.webp"
width="760"
height="277"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>This project examines how re-identification risk varies within datasets,
particularly for individuals with unique or complex diagnostic profiles, and
identifies factors beyond average sample-level risk that contribute to
vulnerability. It aims to assess how data type, dataset structure, and existing
safeguards influence re-identification risk, with a focus on informing
responsible data sharing practices for research involving children with complex
diagnoses.&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>Re-identification risk is disproportionately shouldered by
individuals with unique data profiles (i.e., outliers), including children with
complex diagnostic profiles. There is also limited literature on the factors
that might influence re-identification risk beyond the risk to the whole sample
on average.&lt;/p>
&lt;p>With the increase in requirements for data sharing, it is important to
understand re-identification risk within a sample, but also the
re-identification risk of individuals who are unique. Considerations should be
made that there are some risks that are not within the control of researchers
(e.g., knowing whether someone is in a study or not; a family might willingly
share this information). Thus, extra consideration should be made for those
things that can be controlled by the research team while appreciating the
importance of data sharing and utilization of research data.&lt;/p>
&lt;h2 id="goal">GOAL&lt;/h2>
&lt;p>The goal of this study is to determine factors contributing to re-identification
risk in large datasets and their impact on children with complex diagnostic
profiles.&lt;/p>
&lt;h3 id="aim-1">Aim 1&lt;/h3>
&lt;p>What kind of data is at risk for re-identification and what recommendations have
been made within the literature?&lt;/p>
&lt;p>With the utilization of publicly available data to train AI models, it is
becoming increasingly important for researchers to clearly communicate to
research participants how their data will be used and the risk associated with
data sharing. The first aim of this study will look at the existing literature
and assess the recommendations that have been made to limit re-identification
risk in research data. This aim will also address the existing literature that
has evaluated re-identification risk in specific types of data and the
differences in recommendations based on data type (i.e., MRI, genetic,
behavioural, etc.).&lt;/p>
&lt;h3 id="aim-2">Aim 2&lt;/h3>
&lt;p>How does the shape of the data influence
re-identification risk within a randomly generated sample and is this comparable
to a sample generated based on known variable relationships?&lt;/p>
&lt;h4 id="aim-21">Aim 2.1&lt;/h4>
&lt;p>What factors contribute to greater re-identification risk in a sample of
randomly generated data Assuming that re-identification risk is influenced by
several factors including&lt;/p>
&lt;ol>
&lt;li>the number of possible combinations of variables (# of possible unique
&amp;lsquo;profiles&amp;rsquo;)&lt;/li>
&lt;li>the size of the sample&lt;/li>
&lt;li>the variability of each of the variables (i.e., how likely are there to be
outliers, especially &amp;rsquo;extreme&amp;rsquo; outliers), and&lt;/li>
&lt;li>the number of shared variables which could be considered identifiable.&lt;/li>
&lt;/ol>
&lt;p>The shape of the dataset could influence re-identification risk and by
understanding the relationship between the shape of the dataset and
re-identification risk we can determine the risk level of data sharing.
Particularly for individuals who might have ‘unique’ data profiles which might
be more likely to be re-identified.&lt;/p>
&lt;h4 id="aim-22">Aim 2.2&lt;/h4>
&lt;p>Does a sample that is
based on known variable relationships (co-occurrence between neurodevelopmental
disorders) behave the same way as a fully synthetic dataset?&lt;/p>
&lt;p>One use case where re-identification risk is of particular concern in within
populations of children with unique diagnostic profiles (unique combinations of
diagnoses). Given the high rate of co-occurrence in these disorders, it is
common for a child to present with multiple diagnoses. Determine the
co-occurrence rates for several neurodevelopmental disorders (Autism, ADHD, OCD,
CD, ODD, etc.)&lt;/p></description></item><item><title>Type One Energy</title><link>https://m2pi.ca/project/2026/typeone/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/typeone/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/project/2026/typeone/TypeOneEnergy_hu1a48c556d6642a21a108ab36f18b3f28_42326_69e985a9cd734025eb068b4dfa626b8d.webp 400w,
/project/2026/typeone/TypeOneEnergy_hu1a48c556d6642a21a108ab36f18b3f28_42326_9860542d7df0bb987590ea05cc721252.webp 760w,
/project/2026/typeone/TypeOneEnergy_hu1a48c556d6642a21a108ab36f18b3f28_42326_1200x1200_fit_q90_h2_lanczos_3.webp 1200w"
src="https://m2pi.ca/project/2026/typeone/TypeOneEnergy_hu1a48c556d6642a21a108ab36f18b3f28_42326_69e985a9cd734025eb068b4dfa626b8d.webp"
width="760"
height="137"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>The purpose of this project is to identify, through numerical optimization,
fusion reactor shapes which maximize the average pressure of the fusion fuel.&lt;/p>
&lt;h3 id="about-type-one-energy">About Type One Energy&lt;/h3>
&lt;p>At Type One Energy Group, Inc., we are developing optimized stellarator designs
to provide sustainable, affordable fusion power to the world. We apply proven
advanced manufacturing methods, modern computational physics and high-field
superconducting magnets to pursue the lowest-risk, shortest-schedule path to a
fusion power plant over the coming decade.&lt;/p>
&lt;h3 id="the-problem">The Problem&lt;/h3>
&lt;p>The goal of the project is to determine, via the application of local, global,
and machine-learning based numerical optimization algorithms, the
cross-sectional shapes of fusion reactors which maximize the average pressure of
the fusion fuel. Since the average pressure is computed from the solution of a
partial differential equation (PDE) describing force balance for the fusion fuel
inside the reactor, this is a PDE constrained shape optimization problem.&lt;/p>
&lt;p>The Type One Energy mentor will provide the complete description of the PDE to
be solved and of the toroidal geometry we will work with. They will also provide
Python examples of numerical solvers for the PDE of interest. The participants
will therefore focus their efforts on the development of optimization algorithms
for this shape optimization problem.&lt;/p>
&lt;h3 id="skillset">Skillset&lt;/h3>
&lt;p>The project requires a good command of standard numpy tools and programming
practices in Python. It also requires a good understanding of elementary partial
differential equations (e.g. the first 4 chapters of &lt;a href="https://books.google.ca/books/about/Partial_Differential_Equations.html?id=Xnu0o_EJrCQC" target="_blank" rel="noopener">Evans’ PDE
textbook&lt;/a>)
and a good understanding of elementary numerical analysis / scientific
computing. No prior knowledge of fusion or plasma physics is required. We will
be happy to teach participants as much about fusion and plasma physics as they
would like to learn!&lt;/p>
&lt;p>If we make good progress on this project, it may become valuable to become
familiar with Python-based automated frameworks for solving partial differential
equations, such as Firedrake or FEniCS.&lt;/p></description></item><item><title>IOTO</title><link>https://m2pi.ca/project/2026/ioto/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/ioto/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/project/2026/ioto/IOTOLogo_hu4cefab04bfde003a7127f1221317f7b7_4500_a93614931e0c5a2cab9220c1850c82f7.webp 400w,
/project/2026/ioto/IOTOLogo_hu4cefab04bfde003a7127f1221317f7b7_4500_7ad782d3cfd13a53250d631d438f2074.webp 760w,
/project/2026/ioto/IOTOLogo_hu4cefab04bfde003a7127f1221317f7b7_4500_1200x1200_fit_q90_h2_lanczos_3.webp 1200w"
src="https://m2pi.ca/project/2026/ioto/IOTOLogo_hu4cefab04bfde003a7127f1221317f7b7_4500_a93614931e0c5a2cab9220c1850c82f7.webp"
width="244"
height="114"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="overview">Overview&lt;/h3>
&lt;p>We have controlled vocabularies and topic structures that are used to index
and understand large bodies of text. We want to better understand how text is
clustered around known structured topics so that unknown topics can be
identified in texts and added to our controlled vocabularies and topic
structures.&lt;/p>
&lt;h3 id="background">Background&lt;/h3>
&lt;p>Goverlytics&lt;sup>®&lt;/sup> seeks to produce low-dimensional representations of
legislative activity to: 1) make politics accessible to a broader public; and to
2) increase focus on policy goals. The model for Goverlytics&lt;sup>®&lt;/sup> is
sports analytics, which has transformed the way in which sports are understood
and consumed. Goverlytics&lt;sup>®&lt;/sup> analyzes data generated during
legislative sessions: attendance, documents, transcripts, vote tallies, audio
and video recordings.&lt;/p>
&lt;p>Analytics in sports first &lt;a href="https://invention.si.edu/invention-stories/sports-analytics-moneyball" target="_blank" rel="noopener">began with measurement of what could be easily
measured&lt;/a>
– goals (of course!), strokes, hits, etc. By distilling all that goes on during
the activity into a few dimensions that allow for quantification and comparison,
analytics helps to explain and so increase comprehension and engagement.
Increasingly complex measurements are being engineered from ever larger datasets
to enhance predictions and decision-making. Both short-term outcomes and
strategies that may be decided in game, and for long-term considerations such as
player health are at stake.&lt;/p>
&lt;h3 id="challenge">Challenge&lt;/h3>
&lt;p>In some cases, Goverlytics&lt;sup>®&lt;/sup> has to start creating statistics for
legislative sessions from simple audio tracks. Audio is transcribed into words
of a language. Then the language words (and concatenations of them) are binned
into topic discourse, by means language models and &lt;a href="https://www.comparativeagendas.net/datasets_codebooks" target="_blank" rel="noopener">topic
classifications&lt;/a>.
Finally, topic classifications are used to index parts of the legislative
activity that are likely to be interesting for a broader public. This process
is akin to the distillation of a sporting match into a highlights reel or
abbreviated match summary e.g. What topics were discussed the most? Who talked
about those topics? Were there any significant new topics, or was voting and
discussion about previously known topics? Were there significant outliers? Smash
hits?&lt;/p>
&lt;p>Because legislative sessions can go on for hours with very little information of
predictive or decision-making value, it can be costly to process raw data to
reach insight. The challenge is to find shorter paths to interesting bits of
discourse. Can methods from &lt;a href="https://www.mdpi.com/journal/mathematics/special_issues/Mathematical_Methods_Signal_Analysis" target="_blank" rel="noopener">signal
analysis&lt;/a>
or related mathematical fields be used to more efficiently signpost insight into
legislative data? Unsupervised learning techniques may provide some guidance.
However, a successful solution will reveal what in the legislative activity is
deserving of attention from a policy point of view, ether by connecting with a
known policy ontology (such as &lt;a href="https://www.comparativeagendas.net/pages/master-codebook" target="_blank" rel="noopener">comparative agendas
codebook&lt;/a>, or by
surfacing issues that should be connected to a known ontology.&lt;/p>
&lt;h3 id="data">Data&lt;/h3>
&lt;p>At a minimum, APIs covering topic data for various legislative leagues (Canada,
BC, Alberta, etc.) will be made available to the M2PI team. These APIs reliably
serve data concerning legislative &amp;lsquo;players&amp;rsquo; and their topic-related interventions
over a number of legislative sessions. Corresponding audio will also be supplied.&lt;/p>
&lt;p>Further datasets concerning elections, voting, and financial data may be made
available – depending time available, which legislative leagues the M2PI team
elects to study, and how they choose to analyse.&lt;/p>
&lt;ul>
&lt;li>Finance data are available from
&lt;a href="https://data.oecd.org/gga/general-government-spending.htm" target="_blank" rel="noopener">OECD&lt;/a>, &lt;a href="https://www150.statcan.gc.ca/n1/en/type/data" target="_blank" rel="noopener">Statistics
Canada&lt;/a>, and &lt;a href="https://www2.gov.bc.ca/gov/content/data/statistics/economy/bc-economic-accounts-gdp" target="_blank" rel="noopener">legislative
&amp;rsquo;leagues&amp;rsquo;
themselves&lt;/a>.&lt;/li>
&lt;li>Topics are standardized along &lt;a href="https://www.comparativeagendas.net/pages/master-codebook" target="_blank" rel="noopener">Comparative Agendas Project (CAP)
lines&lt;/a>&lt;/li>
&lt;li>Charts of
&lt;a href="https://www.tpsgc-pwgsc.gc.ca/recgen/pceaf-gwcoa/2324/tdm-toc-eng.html" target="_blank" rel="noopener">accounts&lt;/a>
for
&lt;a href="https://www.oecd-ilibrary.org/sites/df28fbde-en/index.html?itemId=/content/component/df28fbde-en#:~:text=Governments%27%20expenditures%20by%20function%20reveal,and%20public%20order%20and%20safety" target="_blank" rel="noopener">finance&lt;/a>
overlap topic categories, but do not correspond exactly.&lt;/li>
&lt;li>Voting data for bills and motions may be available for &lt;a href="https://www.ourcommons.ca/members/en/votes" target="_blank" rel="noopener">certain
legislatures&lt;/a>.&lt;/li>
&lt;li>Audio files are available for whatever legislative level is chosen for study
by the M2PI team.&lt;/li>
&lt;/ul></description></item><item><title>Quantum Advantage Partners</title><link>https://m2pi.ca/project/2026/qap/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/qap/</guid><description>&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img src="IOTOLogo.png" alt="" loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="overview">Overview&lt;/h3>
&lt;p>In real organizations, each role (CEO, CFO, VP Sales…) sees different
information and applies different decision criteria — yet every multi-agent
simulation framework today gives all agents the same shared context. We want to
mathematically test whether modeling information compartmentalization and
role-specific decision processes produces measurably better collective outcomes
than the standard shared-context approach.&lt;/p>
&lt;h2 id="problem-statement">Problem Statement&lt;/h2>
&lt;p>Multi-agent simulation frameworks (&lt;a href="https://crewai.com" target="_blank" rel="noopener">CrewAI&lt;/a>,
&lt;a href="https://github.com/microsoft/tinytroupe" target="_blank" rel="noopener">TinyTroupe&lt;/a>,
&lt;a href="https://github.com/google-deepmind/concordia" target="_blank" rel="noopener">Concordia&lt;/a>) model organizations by
assigning role labels to agents that all share the same information. This
ignores two fundamental features of real organizations:&lt;/p>
&lt;ol>
&lt;li>information is compartmentalized — a CFO doesn&amp;rsquo;t know everything the VP Sales
knows, and vice versa;&lt;/li>
&lt;li>each role processes information differently — applying distinct criteria,
thresholds, and filters when making decisions.&lt;/li>
&lt;/ol>
&lt;p>We propose a rule-based (no LLM, no API costs) agent-based simulation of a
simplified company with 5-7 roles operating in a stochastic market environment.&lt;/p>
&lt;p>Three configurations are tested against identical market scenarios:&lt;/p>
&lt;style type="text/css">
ol.upperalpha { list-style-type: upper-alpha; }
&lt;/style>
&lt;ol class="upperalpha">
&lt;li> &lt;strong>Baseline&lt;/strong>: shared context, simple role labels only.
&lt;li> &lt;strong>Decision process modeling&lt;/strong>: shared context, but each role applies role-specific decision functions
(different weights, thresholds, filters).
&lt;li> &lt;strong>Full compartmentalization&lt;/strong>: role-specific information subsets AND
role-specific decision functions. Each configuration is run M times across N
simulated quarters using Monte Carlo methods.
&lt;/ol>
&lt;p>Primary metric: &lt;strong>cumulative simulated revenue.&lt;/strong>&lt;/p>
&lt;p>Secondary metrics:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>decision speed&lt;/strong> (cycles to consensus)&lt;/li>
&lt;li>&lt;strong>decision reversal rate&lt;/strong> (coherence proxy), and information request patterns between agents.&lt;/li>
&lt;/ul>
&lt;p>The mathematical work involves:&lt;/p>
&lt;ul>
&lt;li>formalizing agent decision functions and information partition matrices,&lt;/li>
&lt;li>designing the stochastic market environment, specifying the experimental design
(factorial or fractional factorial),&lt;/li>
&lt;li>running simulations in Python, and performing statistical hypothesis testing to compare outcome distributions
across configurations A/B/C.&lt;/li>
&lt;/ul>
&lt;p>A sensitivity analysis identifies which compartmentalization parameters have the
largest effect on outcomes, including boundary conditions where
compartmentalization may hurt rather than help performance.&lt;/p>
&lt;h3 id="expected-background">Expected background&lt;/h3>
&lt;ul>
&lt;li>probability&lt;/li>
&lt;li>stochastic processes&lt;/li>
&lt;li>statistical inference&lt;/li>
&lt;li>Python (NumPy/SciPy).&lt;/li>
&lt;/ul>
&lt;p>Game theory or mechanism design is a plus but not required. The industry mentor
(&lt;a href="https://m2pi.ca/authors/bozzorey">Mehdi Bozzo-Rey, QAP&lt;/a>) will provide will
provide the initial conceptual model of role-specific decision processes,
informed by applied work in deeptech commercialization advisory.&lt;/p>
&lt;p>The team will collaboratively formalize this model into mathematical
specifications during week 1, then implement and test it in weeks 2-3.&lt;/p></description></item><item><title>University of Victoria</title><link>https://m2pi.ca/project/2026/uvic/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://m2pi.ca/project/2026/uvic/</guid><description>&lt;h3 id="overview">Overview&lt;/h3>
&lt;p>The goal of this project is to develop an interface between quantum and classical (binary) computing systems for climate modeling.&lt;/p>
&lt;p>Climate models are large, complex computer programs made up of multiple components that represent different parts of the Earth system, such as the atmosphere, oceans, cryosphere, and vegetation. Each component is typically developed as a separate code, and many of these are further divided into sub-modules.&lt;/p>
&lt;p>For example, atmospheric models usually include a dynamics module—often called the dynamical core—and one or more physics modules. The dynamical core is based on systematic discretization methods (such as finite differences, finite volumes, or spectral methods) to solve the equations of motion. In contrast, the physics modules represent processes that are not explicitly resolved by the dynamical core. These processes occur at spatial or temporal scales smaller than the model’s grid resolution and include phenomena such as radiation, phase changes of water and associated latent heat transfer, turbulence, and convection.&lt;/p>
&lt;p>Because resolving these small-scale processes directly is computationally expensive, they are typically approximated using heuristic models based on ad hoc closure assumptions.&lt;/p>
&lt;p>Although quantum computing is advancing rapidly, it is not yet practical to implement all components of climate models on quantum systems. Moreover, existing classical codes for dynamical cores are well established and highly reliable. However, if a quantum algorithm can be developed for specific sub-grid processes that is both efficient and accurate, it could be integrated with a classical dynamical core to create a hybrid quantum–classical modeling framework.&lt;/p>
&lt;p>As a proof of concept, this project proposes to couple a simple convection model with a toy climate model developed by Khouider et al. (2010). The convection model, known as the Stochastic Multicloud Model (SMCM), is a Markov model that describes the area fractions of three cloud types.&lt;/p>
&lt;p>In &lt;a href="#ref2">Khouider et al. (2010)&lt;/a>, the SMCM is coupled with a set of ordinary differential equations (ODEs) that describe the vertical profiles of temperature and moisture, assuming horizontal homogeneity (i.e., no spatial derivatives). More recently, Ueno and Miura (2025) developed a quantum implementation of the SMCM component alone.&lt;/p>
&lt;p>This project aims to integrate the quantum SMCM code of &lt;a href="#ref1">Ueno and Miura (2025)&lt;/a> with the ODE-based system used in Khouider et al. (2010), which serves as a simplified dynamical core. This integration will act as a demonstration of a hybrid quantum–classical climate modeling approach.&lt;/p>
&lt;p>As a possible extension, the quantum SMCM code could also be applied to machine learning tasks. For example, it could be used to generate sample paths for a synthetic likelihood algorithm to calibrate the SMCM using radar data (Sevilla and Khouider, unpublished work).&lt;/p>
&lt;h2 id="references">References&lt;/h2>
&lt;ol>
&lt;li>&lt;a name="ref1">&lt;/a>Kazumasa Ueno, Hiroaki Miura, Quantum Algorithm for a Stochastic
Multicloud Model, SOLA, 2025, Vol. 21, pp. 43-50, Publication Date
2025/01/22, [Early Release] Publication Date 2024/12/11, Online ISSN
1349-6476, &lt;a href="https://doi.org/10.2151/sola.2025-006" target="_blank" rel="noopener">https://doi.org/10.2151/sola.2025-006&lt;/a>.&lt;/li>
&lt;li>&lt;a name="ref2">&lt;/a> Khouider, B., J. Biello, and A. J. Majda, 2010: &lt;a href="https://projecteuclid.org/journals/communications-in-mathematical-sciences/volume-8/issue-1/A-stochastic-multicloud-model-for-tropical-convection/cms/1266935019.full" target="_blank" rel="noopener">A stochastic multicloud
model for tropical
convection&lt;/a>. Commun. Math. Sci., 8, 187–216.&lt;/li>
&lt;/ol></description></item></channel></rss>