| id | title | status | source_sections | related_topics |
|---|---|---|---|---|
| open-questions | Open Questions | active | Aggregated from all context files | [gb10-superchip memory-and-storage connectivity dgx-os-software ai-frameworks ai-workloads multi-unit-stacking physical-specs setup-and-config skus-and-pricing] |
# Open Questions

Catalog of known unknowns, research gaps, and unresolved questions about the Dell Pro Max GB10.
## Hardware

### GB10 Superchip

- Q: What are the exact clock speeds for the CPU and GPU dies under sustained load?
  - Status: Unknown. No official boost/base clocks published.
  - Would resolve: Performance prediction, thermal modeling
- Q: What is the detailed per-precision TFLOPS breakdown (FP4/FP8/FP16/FP32/FP64)?
  - Status: Only FP4 (1,000 TFLOPS) is officially published. Others are inferred.
  - Would resolve: Accurate workload performance estimation
- Q: What is the thermal throttling behavior?
  - Status: Unknown. Sustained vs. peak performance delta not documented.
  - Would resolve: Real-world performance expectations
### Memory

- Q: Is the LPDDR5X soldered or socketed?
  - Status: Almost certainly soldered (given LPDDR5X and the form factor), but not confirmed.
  - Would resolve: Upgradeability
- Q: What is the memory channel configuration?
  - Status: Unknown. Number of channels not published.
  - Would resolve: Memory performance modeling
### Storage

- Q: Is the M.2 SSD user-replaceable?
  - Status: Unknown. Owner's manual may clarify.
  - Would resolve: Storage upgrade path
- Q: What are the exact sequential throughput and random IOPS figures?
  - Status: Unknown. Drive model not publicly identified.
  - Would resolve: Storage performance expectations
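A proper answer needs `fio` with direct I/O, but a rough sequential-read sanity check can be done in pure Python (note this is a sketch; the page cache and small sample size inflate the number):

```python
import os
import tempfile
import time

def sequential_read_mb_s(size_mb: int = 256, block_kb: int = 1024) -> float:
    """Write a scratch file, then time reading it back in fixed-size blocks.
    Rough figure only: caching and buffering skew the result upward."""
    chunk = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for _ in range(size_mb):
            f.write(chunk)
        path = f.name
    try:
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024):
                pass
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.unlink(path)
```

Identifying the drive itself (`nvme list`, if the NVMe CLI is installed) would also resolve the question by letting the published spec sheet stand in for measurement.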
## Software

### DGX OS

- Q: Can stock Ubuntu 24.04 ARM be installed instead of DGX OS?
  - Status: Likely possible but unsupported. Not documented.
  - Would resolve: OS flexibility
- Q: Full list of pre-installed NVIDIA packages and versions?
  - Status: Partially known. Full manifest not published.
  - Would resolve: Development environment baseline
- Q: Does DGX OS include Docker/container runtime by default?
  - Status: Unknown.
  - Would resolve: Container workflow setup
- Q: OTA update mechanism and cadence?
  - Status: Unknown.
  - Would resolve: Maintenance planning
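The package-manifest and container-runtime questions are both trivially answerable from a shell on the device. A sketch, assuming DGX OS is Debian-based like prior DGX OS releases (an assumption, not confirmed for GB10):

```python
import shutil
import subprocess

def container_runtimes() -> dict[str, bool]:
    """Report which common container runtimes are on PATH."""
    return {name: shutil.which(name) is not None
            for name in ("docker", "podman", "nerdctl")}

def nvidia_packages() -> list[str]:
    """List installed packages whose name contains 'nvidia' (Debian-based)."""
    if shutil.which("dpkg-query") is None:
        return []
    out = subprocess.run(["dpkg-query", "-W", "-f=${Package}\n"],
                         capture_output=True, text=True).stdout
    return [p for p in out.splitlines() if "nvidia" in p]
```

Capturing the full `dpkg-query -W` output on a fresh install would settle the baseline-manifest question outright.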
### AI Frameworks

- Q: TensorFlow support status on ARM GB10?
  - Status: Unknown. Official vs. community builds unclear.
  - Would resolve: Framework selection for TF users
- Q: Full NGC catalog availability for GB10?
  - Status: Unknown which containers have ARM builds.
  - Would resolve: Software ecosystem breadth
- Q: vLLM or other inference server support on ARM Blackwell?
  - Status: Unknown.
  - Would resolve: Production inference deployment options
- Q: JAX support status?
  - Status: Unknown.
  - Would resolve: Framework selection for JAX users
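A first-pass framework survey on the device is straightforward: check the architecture and which packages resolve to an importable module. A minimal sketch (import success alone does not prove GPU support; each framework would still need a device-visibility check):

```python
import importlib.util
import platform

FRAMEWORKS = ("torch", "tensorflow", "jax", "vllm")

def framework_report() -> dict[str, bool]:
    """Report which frameworks are importable in the current environment."""
    return {name: importlib.util.find_spec(name) is not None
            for name in FRAMEWORKS}

if __name__ == "__main__":
    print("arch:", platform.machine())  # expect 'aarch64' on GB10
    for name, installed in framework_report().items():
        print(f"{name}: {'installed' if installed else 'missing'}")
```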
## Networking / Multi-Unit

- Q: What cable/interconnect is required for multi-unit stacking?
  - Status: QSFP cables, but exact type/spec not documented.
  - Would resolve: Multi-unit setup purchasing
- Q: Software configuration steps for multi-unit mode?
  - Status: Not documented publicly.
  - Would resolve: Multi-unit deployment
- Q: Does stacking appear as a single logical device to frameworks?
  - Status: Unknown. May require explicit multi-node code.
  - Would resolve: Development complexity for stacked setups
- Q: Can more than 2 units be stacked?
  - Status: Only a 2-unit configuration is documented.
  - Would resolve: Maximum scaling potential
- Q: Can QSFP ports be used for general networking?
  - Status: Unknown. May be reserved for stacking.
  - Would resolve: Network architecture options
## Physical / Environmental

- Q: Noise levels under load?
  - Status: No dB measurements published.
  - Would resolve: Office/desk suitability
- Q: Operating temperature range?
  - Status: Unknown.
  - Would resolve: Deployment environment requirements
- Q: VESA mount compatibility?
  - Status: Unknown.
  - Would resolve: Mounting options
- Q: Cooling solution details (fan count, heatsink type)?
  - Status: Unknown.
  - Would resolve: Thermal management understanding
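The temperature and (partially) the cooling questions could be probed via the kernel's hwmon interface, assuming the platform exposes its sensors there (not confirmed for GB10). A sketch:

```python
import glob
import os

def hwmon_readings() -> dict[str, float]:
    """Collect temperature readings (in °C) from /sys/class/hwmon sensors.
    Keys are 'chip:sensor'; values are degrees Celsius."""
    readings: dict[str, float] = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            with open(path) as f:
                millideg = int(f.read().strip())
            with open(os.path.join(os.path.dirname(path), "name")) as f:
                chip = f.read().strip()
        except (OSError, ValueError):
            continue
        readings[f"{chip}:{os.path.basename(path)}"] = millideg / 1000.0
    return readings
```

Fan tachometers, where present, appear alongside as `fan*_input` files and would hint at the fan count; noise measurement still needs a physical meter.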
## Performance Benchmarks

- Q: Actual tokens/sec for common LLMs (Llama 3.3 70B, Mixtral, etc.)?
  - Status: No published benchmarks from Dell or independent reviewers yet.
  - Would resolve: Real-world inference performance expectations
- Q: Fine-tuning time estimates for common model sizes?
  - Status: Unknown.
  - Would resolve: Training workflow planning
- Q: Stable Diffusion / image generation performance?
  - Status: Unknown.
  - Would resolve: Non-LLM AI workload suitability
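Once a unit is available, tokens/sec numbers could be gathered with a small timing harness. A sketch; the `generate` adapter is hypothetical and would wrap whatever runtime ends up supported (a llama.cpp binding, a vLLM client, etc.):

```python
import time
from typing import Callable

def measure_tps(generate: Callable[[int], int],
                max_new_tokens: int = 128) -> float:
    """Time one generation call and return decode throughput in tokens/sec.
    `generate` takes a token budget and returns the number of tokens
    actually produced (hypothetical adapter around the real runtime)."""
    start = time.perf_counter()
    produced = generate(max_new_tokens)
    elapsed = time.perf_counter() - start
    return produced / elapsed if elapsed > 0 else 0.0
```

For publishable numbers the harness would also need a warm-up pass and separate prefill vs. decode timing, since the two phases stress the chip differently.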
## Resolved Questions

(Move questions here as they get answered, with date and resolution.)

| Date | Question | Resolution | Source |
|---|---|---|---|
| — | — | — | — |