Start with the AI infrastructure layer, then drill into companies.
Choose an industry first. Each industry page exposes segment filters and the mapped public-company universe.
Fabless
Chip designers whose products define AI compute, acceleration, and platform control points.
Question
Who designs the silicon that creates or routes AI compute?
Foundry
Manufacturing platforms that turn compute, networking, memory, and specialty silicon designs into wafers.
Question
Who manufactures the chips, and at which process node or capacity bottleneck?
Memory
HBM, DRAM, NAND, and related memory suppliers that determine bandwidth and system capacity.
Question
Where are bandwidth, bit growth, pricing, and supply constraints moving?
Networking
Switching, NIC, interconnect, and network silicon or systems required to scale AI clusters.
Question
How do accelerators communicate inside and across clusters?
Optics
Optical modules, components, lasers, coherent systems, and CPO-related suppliers.
Question
Where does electrical networking hit optical bandwidth and power limits?
Semicap
Wafer fab, process control, test, and advanced packaging equipment used to build AI silicon.
Question
Which equipment steps unlock AI silicon capacity and packaging complexity?
Materials / Consumables
Wafers, substrates, chemicals, gases, ceramics, and other inputs consumed by fabs and packaging lines.
Question
Which recurring inputs become bottlenecks as AI silicon complexity rises?
Integration
Board, module, rack, EMS, ODM, and packaging integration suppliers that assemble AI infrastructure.
Question
Who converts components into modules, boards, racks, and deployable systems?
DC Infra & Power
Power generation, grid, electrical, cooling, and data center infrastructure tied to AI deployment.
Question
What physical infrastructure constrains token production?
AI Cloud
Hyperscalers, neoclouds, GPU clouds, and compute providers that rent or operate AI capacity.
Question
Who owns the clusters and sells access to compute?
AI Model
Model labs and AI product companies that convert compute into inference demand and token output.
Question
Who creates end demand for inference and training capacity?