Research Report: The Physics Shortcut: Algorithmic Innovations Enabling Hubbard Model Simulations on Consumer Hardware and the Democratization of Quantum Material Research

Executive Summary

This report synthesizes extensive research into the newly developed computational strategies, collectively termed the 'physics shortcut,' that enable the simulation of the quantum Hubbard model on consumer-grade hardware. The Hubbard model, fundamental to understanding strongly correlated quantum materials and phenomena like high-temperature superconductivity, has historically been computationally intractable for all but the smallest systems, confining research to elite supercomputing centers. Our findings indicate that the 'physics shortcut' is not a single monolithic algorithm but a multifaceted paradigm shift encompassing a suite of advanced, complementary techniques that attack this complexity from different angles.

The primary optimization strategies identified are:

  1. Semiclassical Approximation via the Fermionic Truncated Wigner Approximation (fTWA): This method fundamentally reduces computational complexity by mapping the exponential-scaling quantum problem onto a classical-like phase-space governed by stochastic differential equations. This changes the problem's scaling from exponential to a manageable polynomial (quadratic) with system size, representing the core of the "shortcut."
  2. GPU Acceleration of Established Solvers: Significant performance gains, ranging from 30x to over 350x, are achieved by porting established numerical methods like Hybrid Monte Carlo (HMC), Exact Diagonalization (ED), and Quantum Monte Carlo (QMC) to leverage the parallel processing architecture of modern consumer Graphics Processing Units (GPUs).
  3. Physics-Informed Compression with Tensor Networks: Methods such as the Density Matrix Renormalization Group (DMRG) for one-dimensional systems and Projected Entangled Pair States (PEPS) for two-dimensional systems provide an efficient, compressed representation of the quantum wavefunction. By exploiting the physical principle that relevant states have limited entanglement, these techniques avoid storing the full, exponentially large state vector. Crucially, PEPS-based approaches circumvent the notorious "fermion sign problem" that plagues many other simulation methods.
  4. Learned Representations with Neural Network Quantum States (NNQS): A cutting-edge approach uses advanced machine learning architectures, such as transformers, as a variational ansatz to represent the quantum wavefunction. This data-driven method has achieved unprecedented accuracy in calculating the ground-state properties of the challenging doped 2D Hubbard model.

These core strategies are further enhanced by a toolbox of supporting optimizations, including the explicit enforcement of physical symmetries, numerical quantization to reduce memory footprints, and the use of machine learning-based surrogate models to accelerate parameter space exploration.

The implications of this technological shift are profound, catalyzing a widespread democratization of quantum material research. By migrating a significant class of simulations from supercomputers to laptops, this paradigm shift lowers the barrier to entry for researchers at smaller institutions and in developing nations, accelerates the pace of discovery through rapid local iteration, transforms physics education by enabling hands-on computational experiments, and optimizes the use of global high-performance computing (HPC) resources by freeing them for truly intractable problems. This report provides a detailed analysis of these technical mechanisms and their transformative impact on the scientific ecosystem.

1. Introduction

The Hubbard model is a cornerstone of condensed matter physics, offering a simplified yet powerful description of interacting electrons on a crystal lattice. Its solutions are believed to hold the key to understanding some of the most profound and technologically promising phenomena in materials science, including high-temperature superconductivity, quantum magnetism, and exotic phases of matter. Despite its apparent simplicity, solving the Hubbard model is a canonical "grand challenge" in computational physics. The quantum state of the system resides in a Hilbert space whose dimensions grow exponentially with the number of particles—a barrier known as the "curse of dimensionality." Furthermore, many powerful simulation methods are crippled by the "fermion sign problem," a numerical instability that renders calculations at low temperatures or with specific particle numbers prohibitively expensive.

Historically, these challenges have created a research landscape where meaningful progress required access to the world's most powerful and expensive supercomputers. This dependency has inherently limited the scope and pace of research, concentrating cutting-edge computational capability within a small number of well-funded national laboratories and elite universities.

This report investigates a recent and dramatic shift in this paradigm. A confluence of algorithmic innovations, collectively referred to as a 'physics shortcut,' is now making it possible to perform high-fidelity simulations of the Hubbard model on consumer-grade hardware, such as standard laptops and desktops equipped with modern GPUs. This research report synthesizes findings from multiple investigative phases to provide a comprehensive answer to the central research query: How does the newly developed 'physics shortcut' algorithm specifically optimize the Hubbard model calculations to allow consumer-grade hardware to simulate quantum many-body systems, and what are the implications for democratizing quantum material research beyond supercomputing centers?

The following sections will deconstruct the 'physics shortcut,' revealing it to be not a single method but a diverse ecosystem of computational strategies. We will analyze the core technical mechanisms of each approach—from semiclassical approximations and tensor network representations to machine learning ansätze and GPU-centric optimizations. Subsequently, we will explore the profound and cascading implications of these technologies, examining how they are reshaping the landscape of scientific discovery, education, and global collaboration in the quest to understand and engineer the quantum world.

2. Key Findings

The comprehensive research conducted reveals that the 'physics shortcut' is a confluence of multiple distinct but synergistic computational strategies. These strategies collectively circumvent the traditional exponential scaling barriers of quantum many-body problems, enabling their simulation on accessible, consumer-grade hardware. The primary findings are organized below by thematic area.

2.1 Finding: The 'Physics Shortcut' is a Multifaceted Computational Strategy

The term 'physics shortcut' does not refer to a single algorithm but rather to a diverse set of advanced computational paradigms that reduce the complexity of Hubbard model simulations. The research identified four principal pillars of this strategy:

  1. Semiclassical Approximations: Methods that map the quantum problem onto a more tractable, classical-like statistical problem.
  2. GPU Acceleration: The porting and optimization of established numerical solvers to exploit the massive parallelism of consumer GPUs.
  3. Compressed Representations (Tensor Networks): Physics-informed techniques that efficiently represent the quantum state by discarding redundant information.
  4. Learned Representations (Machine Learning): Data-driven methods that use neural networks to approximate the complex quantum wavefunction.

2.2 Finding: Semiclassical Approximations Fundamentally Alter Computational Scaling

The most prominent 'shortcut' is a modernized application of the Fermionic Truncated Wigner Approximation (fTWA).

  • Core Mechanism: fTWA maps the quantum dynamics of interacting fermions onto an ensemble of classical trajectories in a specially constructed phase-space. The evolution is governed by a set of Stochastic Differential Equations (SDEs).
  2. Scaling Breakthrough: This mapping changes the computational scaling of the problem from exponential in system size (as in exact methods) to a far more manageable polynomial (quadratic) scaling; a back-of-envelope comparison follows this list. This is the single most important optimization that enables simulations on laptops.
  • Realistic Physics: The updated fTWA framework can model dissipative spin dynamics, allowing for the simulation of more realistic open quantum systems that interact with their environment, a crucial feature for comparing with real-world experiments.
  • Accessibility: The method is framed as a "user-friendly conversion table," significantly lowering the implementation barrier and allowing physicists to become proficient within days.
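
For a concrete sense of the gap, the back-of-envelope comparison below counts operations per time step. The trajectory count M and the assumption that one sparse Hamiltonian application costs a few operations per amplitude are illustrative choices, not figures from the underlying studies.

```python
N, M = 24, 10_000                # lattice sites; semiclassical trajectories
hilbert = 4 ** N                 # spinful Hubbard: 4 local states per site
exact_step = 10 * hilbert        # ~one sparse-Hamiltonian application per step
ftwa_step = M * N ** 2           # ensemble of trajectories, O(N^2) work each

print(f"exact : ~{exact_step:.1e} ops per step")
print(f"fTWA  : ~{ftwa_step:.1e} ops per step "
      f"({exact_step / ftwa_step:.0e}x fewer)")
```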

2.3 Finding: GPU Acceleration Delivers Massive Speedups for Established Methods

A highly practical and immediate shortcut involves leveraging consumer GPU hardware to accelerate existing, well-understood algorithms.

  • Performance Gains: Documented speedups are substantial, ranging from 30x to 350x for Hybrid Monte Carlo (HMC) simulations and over 100x for Exact Diagonalization (ED) calculations, compared with traditional CPU-based execution.
  • Broad Applicability: This approach benefits a wide range of methods, including HMC, ED, and Determinant Quantum Monte Carlo (DQMC), by offloading the most computationally intensive linear algebra operations (e.g., matrix multiplications, inversions) to the GPU's parallel cores.
  • Hybrid Approaches: New strategies combine semiclassical approximations (SCA) with machine-learning optimizers such as ADAM, implemented in frameworks like PyTorch and CUDA that are natively designed for GPU execution.

2.4 Finding: Tensor Networks Provide a Physics-Informed Compression of the Quantum State

Tensor network methods offer a powerful way to manage the 'curse of dimensionality' by exploiting the physical structure of quantum entanglement.

  • Efficient Representation: For 1D systems, the Density Matrix Renormalization Group (DMRG) using a Matrix Product State (MPS) is exceptionally effective. For 2D systems, Projected Entangled Pair States (PEPS) provide a natural and powerful generalization.
  • Intrinsic Compression: The core efficiency mechanism is bond dimension truncation via Singular Value Decomposition (SVD). This systematically discards the least entangled (and thus least physically relevant) parts of the quantum state, dramatically reducing memory and computational requirements.
  • Overcoming the Sign Problem: Crucially, the fermionic PEPS (fPEPS) formalism is inherently immune to the fermion sign problem, a major obstacle that cripples QMC methods in many important regimes (e.g., low temperature, finite doping).

2.5 Finding: Neural Network Quantum States (NNQS) Emerge as a State-of-the-Art Variational Solver

A new frontier in quantum simulation involves using deep learning to represent the many-body wavefunction.

  • Unprecedented Accuracy: NNQS, particularly those employing advanced architectures like transformers, have demonstrated the ability to capture the complex, long-range correlations in the Hubbard model, achieving state-of-the-art accuracy for the ground state of the challenging 2D doped system.
  • Variational Optimization: The neural network's parameters are optimized using the Variational Monte Carlo (VMC) framework, where the network itself acts as a highly expressive variational ansatz for the true wavefunction.
  • Flexibility: Unlike the fixed structure of tensor networks, neural networks offer a more flexible and potentially more powerful function approximator, capable of learning complex correlation patterns directly from the data generated during optimization.

2.6 Finding: A Toolbox of Cross-Cutting Optimizations Further Enhances Performance

Beyond the four main pillars, a range of supporting techniques are critical for making simulations practical on consumer hardware.

  • Symmetry Exploitation: Explicitly enforcing known physical symmetries of the Hubbard model (e.g., U(1) particle-number conservation, SU(2) spin symmetry) within the tensor network or other data structures leads to dramatic reductions in memory and computational cost; a dimension-counting sketch follows this list.
  • Data and Memory Management: Techniques like sparse matrix formats (e.g., the ELL format) and quantization (reducing numerical precision from 32-bit floating point to 8-bit integers) drastically lower the memory footprint, allowing larger systems to fit into the limited VRAM of consumer GPUs.
  • Workflow Acceleration: Computationally inexpensive surrogate models, trained on a small number of high-fidelity simulations, can rapidly predict material properties across a vast parameter space, accelerating the initial exploration phase of research.
  • Specific Algorithm Speedups: Targeted optimizations, such as a novel algorithm for generating thermal pure states, have yielded concrete performance gains of approximately 3.5 times for finite-temperature Hubbard model calculations.
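
The dimension-counting sketch referenced above shows why symmetry sectors pay off; the 16-site, half-filled example is hypothetical and chosen only to make the numbers tangible.

```python
from math import comb

def full_dim(n_sites):
    """Full Hilbert-space dimension of a spinful Hubbard lattice:
    each site is empty, spin-up, spin-down, or doubly occupied."""
    return 4 ** n_sites

def sector_dim(n_sites, n_up, n_down):
    """Dimension of the fixed (N_up, N_down) block selected by the
    U(1) particle-number symmetry of each spin species."""
    return comb(n_sites, n_up) * comb(n_sites, n_down)

N = 16
full = full_dim(N)
half = sector_dim(N, 8, 8)       # half filling, zero magnetization
print(f"full space   : {full:,}")
print(f"(8, 8) sector: {half:,}  ({full // half}x smaller)")
```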

2.7 Finding: The Overarching Implication is the Profound Democratization of Quantum Research

The collective impact of these technical innovations is a fundamental shift in the sociology and accessibility of computational quantum materials science.

  • Lowered Barrier to Entry: The primary dependency on centralized, expensive supercomputing centers is broken, empowering a global community of researchers, educators, and students at institutions of all sizes.
  • Accelerated Scientific Discovery: The ability to run simulations rapidly on local machines tightens the feedback loop between theory and experiment, allowing for faster iteration, broader exploration of ideas, and potentially a higher rate of discovery.
  • Transformation of Education: These tools enable a paradigm shift in physics education from purely theoretical instruction to hands-on, interactive computational learning, helping to address the global quantum talent shortage.
  • Strategic Optimization of Resources: By offloading a large class of simulations to the vast, distributed network of consumer computers, these methods free up scarce HPC resources to be focused on the most complex problems that remain classically intractable.

3. Detailed Analysis

This section provides a deeper technical examination of the key computational strategies identified, detailing their underlying mechanisms and connecting them directly to the capability of running on consumer-grade hardware.

3.1 The Semiclassical Revolution: Fermionic Truncated Wigner Approximation (fTWA)

The fTWA method stands out as a true 'shortcut' because it fundamentally reformulates the quantum problem into a more computationally tractable form.

Mechanism of Action: The core idea is to move from the exponentially large Hilbert space to a classical-like phase-space. The quantum state of the system is not represented by a state vector but by a quasi-probability distribution (the Wigner function) over this phase-space. The quantum evolution, described by the Schrödinger equation, is mapped onto a set of classical-like equations of motion for trajectories within this space. For interacting fermionic systems, these take the form of Stochastic Differential Equations (SDEs).

The Scaling Breakthrough: This reformulation is the key to accessibility. An exact quantum simulation must manipulate state vectors and matrices whose dimension grows exponentially with the number of sites N (4^N for the spinful Hubbard model, since each site can be empty, spin-up, spin-down, or doubly occupied). The fTWA method avoids this entirely. Instead, it simulates an ensemble of M classical trajectories. The computational cost for each trajectory scales polynomially with the system size, specifically as N^2, giving a total cost of M * N^2. While M must be large to achieve good statistics, the scaling with system size N is no longer exponential. This quadratic scaling is what brings simulations of moderately large systems within the reach of a standard laptop CPU.
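
A minimal Python sketch of this ensemble structure is shown below. It is not the published fTWA scheme: the Gaussian initial sampling, the placeholder coupling matrix J, and the noise strength all stand in for the proper Wigner-function sampling and mean-field SDEs. The point is the cost profile: M trajectories, each dominated by an O(N^2) matrix-vector product per step.

```python
import numpy as np

def semiclassical_ensemble(N=32, M=2000, steps=200, dt=0.01,
                           noise=0.05, seed=0):
    """Schematic Euler-Maruyama evolution of M phase-space trajectories.

    The matrix-vector product in the drift is what gives the O(M * N^2)
    cost per step, replacing the exponential cost of exact dynamics.
    """
    rng = np.random.default_rng(seed)
    J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    J = 0.5 * (J - J.T)                        # antisymmetric generator, so the
                                               # deterministic flow conserves norm
    rho = rng.normal(scale=0.5, size=(M, N))   # Wigner-like initial sampling
    for _ in range(steps):
        drift = rho @ J                        # placeholder mean-field drift
        rho += dt * drift + np.sqrt(dt) * noise * rng.normal(size=(M, N))
    return rho.mean(axis=0)                    # ensemble-averaged observables

print(semiclassical_ensemble()[:4])
```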

Beyond Mean-Field Theory: Crucially, fTWA is not a purely classical approximation. It is a semiclassical method that systematically incorporates leading-order quantum fluctuations around the classical (mean-field) trajectories. This allows it to capture essential quantum phenomena like tunneling and interference, providing a level of physical accuracy far beyond simpler theories. Furthermore, the ability of the updated method to handle dissipative dynamics means it can model the realistic interaction of a quantum system with its environment, a critical feature for describing real materials that are never perfectly isolated.

3.2 Brute Force Refined: GPU Acceleration of Classical Solvers

This strategy does not change the fundamental nature of the algorithms but rather exploits a massive shift in consumer hardware architecture to accelerate them dramatically.

Mechanism of Action: Numerical methods like Exact Diagonalization, Hybrid Monte Carlo, and Determinant Quantum Monte Carlo rely heavily on a small set of core linear algebra operations: matrix-vector multiplication, matrix-matrix multiplication, and matrix inversion. A modern CPU executes these tasks sequentially or with a few parallel cores. A consumer GPU, in contrast, contains thousands of smaller, simpler cores designed to perform such operations in parallel. By rewriting the code using frameworks like NVIDIA's CUDA or by linking to high-performance libraries like Intel oneMKL, these computationally intensive kernels can be offloaded to the GPU.

Quantifiable Impact: The performance gains are direct and measurable. For ED, where the goal is to find the lowest eigenvalues of the massive Hamiltonian matrix using iterative methods like the Lanczos algorithm, speedups of over 100x have been reported for 2D systems. For HMC, a Monte Carlo method, speedups range from 30x to 350x. This means a simulation that would take ten days on a CPU could potentially be completed in under an hour on a consumer GPU, transforming the research workflow from a multi-week project to an afternoon's task. This acceleration is what allows researchers to tackle larger system sizes or collect much better statistics than would be feasible on a CPU alone.
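
The pattern is easy to reproduce at home. The sketch below times the dense matrix multiply that dominates these kernels, using NumPy for the CPU path and, if available, PyTorch for the CUDA path; the matrix size and the choice of PyTorch are illustrative assumptions, and production codes typically call BLAS-level GPU libraries directly.

```python
import time
import numpy as np

try:
    import torch                         # GPU path is optional
except ImportError:
    torch = None

def benchmark(n=4096):
    """Time one dense float32 matmul on CPU (NumPy) and, if a CUDA
    device is present, on GPU (PyTorch); this is the kernel class
    that dominates ED/HMC/DQMC inner loops."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    t0 = time.perf_counter()
    _ = a @ b
    cpu = time.perf_counter() - t0
    print(f"CPU : {cpu:.3f} s")

    if torch is not None and torch.cuda.is_available():
        ta = torch.from_numpy(a).cuda()
        tb = torch.from_numpy(b).cuda()
        _ = ta @ tb                      # warm-up to amortize kernel launch
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        _ = ta @ tb
        torch.cuda.synchronize()
        gpu = time.perf_counter() - t0
        print(f"GPU : {gpu:.3f} s  (~{cpu / gpu:.0f}x)")

benchmark()
```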

3.3 Taming the Wavefunction: The Power of Tensor Networks

Tensor networks are a direct attack on the 'curse of dimensionality' based on a deep physical insight about the nature of entanglement in quantum systems.

Mechanism of Action: Instead of storing the exponentially many coefficients of the wavefunction, a tensor network represents it as a network of interconnected, much smaller tensors. For a 1D system, a Matrix Product State (MPS) represents the state as a "chain" of tensors, which can be efficiently optimized using the Density Matrix Renormalization Group (DMRG) algorithm. The key insight is that the ground states of gapped, local Hamiltonians are not randomly distributed throughout Hilbert space; they obey an "area law" of entanglement, meaning they have a relatively simple entanglement structure that can be captured efficiently by this representation.

The compression comes from the bond dimension (D), which controls the size of the matrices connecting the tensors. The SVD algorithm is used to truncate this dimension, effectively throwing away the quantum states with the least contribution to the overall entanglement. This provides a controllable trade-off: a larger D gives higher accuracy at a higher computational cost. The number of parameters scales polynomially with N and D, avoiding the exponential catastrophe.
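
The truncation step itself fits in a few lines. The sketch below builds a toy two-site block with an artificially fast-decaying singular spectrum (a stand-in for the entanglement spectrum of an area-law ground state; a real DMRG code would obtain this block during a sweep) and truncates it to bond dimension D.

```python
import numpy as np

def truncate(theta, D):
    """Keep the D largest singular values of a two-site block, the
    core compression move in a DMRG/MPS sweep."""
    U, s, Vh = np.linalg.svd(theta, full_matrices=False)
    weight = np.sum(s[D:] ** 2) / np.sum(s ** 2)   # discarded weight
    return U[:, :D], s[:D], Vh[:D, :], float(weight)

rng = np.random.default_rng(1)
# toy block with an exponentially decaying spectrum, mimicking area-law states
U0, _ = np.linalg.qr(rng.normal(size=(64, 64)))
V0, _ = np.linalg.qr(rng.normal(size=(64, 64)))
theta = U0 @ np.diag(np.exp(-0.4 * np.arange(64))) @ V0

for D in (4, 8, 16):
    *_, err = truncate(theta, D)
    print(f"D = {D:2d}  discarded weight = {err:.2e}")
```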

Solving the 2D Challenge: For 2D systems, Projected Entangled Pair States (PEPS) extend this idea to a 2D grid of tensors. While computationally more demanding than DMRG, fermionic PEPS offer a landmark advantage: they are constructed in a way that is immune to the fermion sign problem. This allows them to explore the challenging low-temperature, doped regimes of the 2D Hubbard model that are inaccessible to many QMC methods, opening a critical window into the physics of high-temperature superconductivity.

3.4 Learning the Quantum State: Neural Networks as a Variational Ansatz

This approach leverages the extraordinary power of modern deep learning to find highly accurate approximations to the quantum wavefunction.

Mechanism of Action: The central idea is to use a neural network, Ψ(θ), parameterized by weights and biases θ, as the trial wavefunction. The network takes a configuration of electrons as input and outputs the corresponding complex amplitude of the wavefunction. The goal is to find the optimal parameters θ that minimize the system's energy, E = <Ψ|H|Ψ> / <Ψ|Ψ>. This is achieved through the Variational Monte Carlo (VMC) framework, an iterative process where one:

  1. Samples configurations of the system based on the current probability distribution |Ψ(θ)|^2.
  2. Calculates the energy and its gradient with respect to the parameters θ.
  3. Updates θ using a gradient-based optimizer (like ADAM, borrowed from the machine learning field).

The success of this method hinges on the expressive power of the neural network. Modern architectures like transformers, with their self-attention mechanism, have proven exceptionally adept at capturing the complex, long-range, and multi-scale correlations present in strongly correlated systems like the doped Hubbard model, leading to ground-state energy calculations of unprecedented accuracy.
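
To make the loop concrete, the self-contained toy below runs VMC with a two-parameter Jastrow-style ansatz and plain gradient descent on the 1D transverse-field Ising chain, which stands in for the Hubbard model (whose fermionic ansätze, transformer networks, and ADAM optimizer are far heavier machinery). Every modeling choice here is an illustrative assumption; only the loop structure (sample from |Ψ(θ)|^2, estimate the local energy and its gradient, update θ) mirrors the method described above.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_psi(s, a, b):
    """Toy Jastrow-style log-amplitude for a spin chain (s_i = +/-1)."""
    return a * np.sum(s * np.roll(s, -1)) + b * np.sum(s)

def local_energy(s, a, b, J=1.0, h=1.0):
    """E_loc(s) = <s|H|psi> / psi(s) for H = -J sum(zz) - h sum(x)."""
    e = -J * np.sum(s * np.roll(s, -1))              # diagonal zz term
    for i in range(len(s)):                          # each sigma^x flips one spin
        t = s.copy(); t[i] = -t[i]
        e -= h * np.exp(log_psi(t, a, b) - log_psi(s, a, b))
    return e

def vmc_iteration(a, b, N=16, sweeps=4000, lr=0.01):
    """One outer VMC step: Metropolis-sample |psi|^2, then a gradient update."""
    s = rng.choice(np.array([-1, 1]), size=N)
    stats = np.zeros(5)                              # E, Oa, Ob, E*Oa, E*Ob
    n = 0
    for k in range(sweeps):
        i = rng.integers(N)
        t = s.copy(); t[i] = -t[i]
        if rng.random() < np.exp(2.0 * (log_psi(t, a, b) - log_psi(s, a, b))):
            s = t                                    # accept with |psi'/psi|^2
        if k >= sweeps // 4 and k % 4 == 0:          # measure after burn-in
            e = local_energy(s, a, b)
            oa = np.sum(s * np.roll(s, -1))          # d(log psi)/da
            ob = np.sum(s)                           # d(log psi)/db
            stats += [e, oa, ob, e * oa, e * ob]
            n += 1
    E, Oa, Ob, EOa, EOb = stats / n
    grad_a = 2.0 * (EOa - E * Oa)                    # variational energy gradient
    grad_b = 2.0 * (EOb - E * Ob)
    return a - lr * grad_a, b - lr * grad_b, E / N

a, b = 0.1, 0.0
for step in range(15):
    a, b, e_site = vmc_iteration(a, b)
print(f"a = {a:.3f}, b = {b:.3f}, energy per site ~ {e_site:.3f}")
```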

4. Discussion: Synthesis of Findings and Broader Implications

The emergence of the 'physics shortcut' represents more than an incremental improvement in computational power; it signifies a structural transformation in the methodology and sociology of quantum materials research. This section synthesizes the analyzed findings to discuss the spectrum of available shortcuts and the profound implications of their widespread adoption.

4.1 A Spectrum of Shortcuts: Choosing the Right Tool for the Problem

The various techniques do not render each other obsolete but rather form a complementary toolkit. The choice of method depends on the specific problem's dimensionality, desired accuracy, and the nature of the physics being investigated.

  • fTWA (semiclassical approximation): Extremely fast, with polynomial scaling; models dynamics and dissipation. Limitation: approximate, and accuracy can degrade for highly entangled systems or long times. Ideal for simulating quantum dynamics, thermalization, and open systems on laptops.
  • GPU-QMC (hardware acceleration): Massive speedup of established, often exact-in-principle methods. Limitation: does not solve intrinsic issues like the fermion sign problem. Ideal for large-scale parameter sweeps on problems where QMC is known to work well.
  • DMRG, 1D (compressed representation): Numerically exact for gapped 1D systems, with extremely high accuracy. Limitation: performance degrades rapidly in 2D, where the required bond dimension grows quickly with system width. Ideal for high-precision studies of 1D chains, ladders, and quasi-1D materials.
  • PEPS, 2D (compressed representation): Avoids the sign problem and is native to 2D systems. Limitation: computationally expensive, and contracting the network is itself a hard problem. Ideal for ground-state properties of the 2D Hubbard model, especially at finite doping.
  • NNQS (learned representation): State-of-the-art accuracy from a highly flexible, expressive ansatz. Limitation: high training cost, can behave as a "black box," and representing fermions is complex. Ideal for pushing the accuracy frontier on challenging ground-state problems such as the doped 2D Hubbard model.

This diverse landscape allows researchers to select the optimal trade-off between speed, accuracy, and implementation complexity, democratizing not just access but also methodological choice.

4.2 The Socio-Technical Impact of Democratization

The most significant implication of these findings is the dismantling of the dependency on centralized supercomputing infrastructure for a vast class of quantum many-body problems. This has a cascading effect on the entire scientific ecosystem.

  • Broadening the Research Community: The ability to conduct cutting-edge research on a high-end desktop or laptop empowers a much wider and more diverse community. Researchers at smaller universities, teaching-focused institutions, and in developing countries can now actively contribute to a field previously dominated by a few major centers. This influx of new perspectives and talent can accelerate progress and uncover novel scientific directions.

  • Accelerating the Innovation Cycle: The traditional research cycle involving supercomputers is often slow, involving lengthy grant applications, waiting in job queues, and analyzing large datasets. The 'physics shortcut' paradigm allows for a "democratization of iteration." A researcher can formulate a hypothesis, run a simulation, analyze the results, and refine the idea within a single day. This agility dramatically shortens the feedback loop, allowing for rapid exploration of new theories and material parameters.

  • Revolutionizing Quantum Education: These accessible tools are poised to transform quantum physics and condensed matter education. Instead of being a purely abstract, mathematical subject, students can now gain hands-on, intuitive experience. They can run their own simulations of the Hubbard model, visually observe the formation of magnetic order, and explore phase transitions by tuning parameters on their own computers. This experiential learning is invaluable for training the next generation of quantum scientists and engineers and is a direct answer to the widely acknowledged "quantum talent gap."

  • Optimizing the Global Computing Ecosystem: These methods do not make supercomputers obsolete; they make them more valuable. By offloading a significant workload to the vast, distributed network of consumer-grade machines, they free up the world's most powerful HPC resources. These national assets can then be focused on the "grand challenge" problems that remain beyond the reach of any shortcut—such as full-scale climate modeling, large-scale cosmological simulations, or simulating quantum computers themselves. This creates a more efficient, tiered, and sustainable global research infrastructure.

5. Conclusions

The research confirms that the 'physics shortcut' is not a single discovery but a paradigm shift born from the convergence of theoretical physics, computer science, and machine learning. It represents a toolbox of sophisticated computational strategies—including the fTWA's semiclassical mapping, the brute-force acceleration of GPUs, the physics-informed compression of tensor networks, and the data-driven power of neural network quantum states—that collectively break the exponential scaling wall of the Hubbard model for a wide range of important problems.

The specific optimization mechanism is tailored to each approach: fTWA achieves it by fundamentally altering the problem's scaling from exponential to polynomial; tensor networks achieve it by intelligently compressing the quantum state based on physical principles of entanglement; and GPU acceleration achieves it by mapping computationally intensive kernels onto massively parallel hardware.

The implications of this shift extend far beyond computational physics, catalyzing a profound democratization of quantum material research. By moving the frontier of simulation from the exclusive domain of the supercomputer to the accessible realm of the laptop, these innovations are leveling the scientific playing field, accelerating the pace of discovery, and transforming the educational pipeline. This new paradigm fosters a more inclusive, agile, and globally distributed research ecosystem, better equipped to tackle the complex challenges of designing the next generation of quantum materials. As these tools mature and become even more accessible, they will serve as indispensable platforms for scientific inquiry, ultimately guiding both experimental efforts and the development of future fault-tolerant quantum computers.
