TensorFlow
v1.0.0 · Avoid common TensorFlow mistakes — tf.function retracing, GPU memory, data pipeline bottlenecks, and gradient traps.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
The name and description match the SKILL.md content. The only required binary is python3, which is reasonable for TensorFlow guidance; no unrelated env vars, binaries, or config paths are requested.
Instruction Scope
SKILL.md contains best-practice notes and code snippets (tf.function, GPU memory settings, tf.data, gradients, saving). It does not instruct reading user files, secrets, or sending data to external endpoints. It mentions CUDA_VISIBLE_DEVICES as a common testing env var but does not attempt to read or exfiltrate secrets.
Install Mechanism
No install spec or code files are present; this is instruction-only, so nothing is downloaded or written to disk by the skill itself (lowest install risk).
Credentials
The skill declares no required environment variables or credentials. It references a common runtime env var (CUDA_VISIBLE_DEVICES) in examples but does not require or request any secrets or unrelated credentials.
Persistence & Privilege
`always` is false and the skill is user-invocable. It does not request persistent presence or system-wide configuration changes. `disable-model-invocation` is false (the normal platform default).
Assessment
This skill is instruction-only (no downloads or code) and provides TensorFlow debugging tips — low risk. Note that source/homepage are not provided, so the author can't be verified; that matters more for code that would be installed. Before following commands: run them in a controlled environment where you have permission (e.g., a development VM or container), verify TensorFlow version compatibility, and cross-check the snippets against the official TensorFlow docs. Do not provide any credentials to this skill (none are requested). For even lower risk, manually copy the relevant advice rather than letting an agent act autonomously on your environment.
Runtime requirements
OS: Linux · macOS · Windows
Bins: python3 (latest)
tf.function Retracing
- New input shape/dtype causes retrace — expensive, prints warning
- Use `input_signature` for fixed shapes — `@tf.function(input_signature=[tf.TensorSpec(...)])`
- Python values retrace — pass as tensors, not Python ints/floats
- Avoid Python side effects in tf.function — only runs once during tracing
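A minimal sketch of the `input_signature` fix above: pinning the signature to a `[None]`-shaped float32 spec lets any 1-D float32 tensor reuse the first trace instead of retracing per shape.

```python
import tensorflow as tf

# Pinning input_signature means new 1-D float32 shapes reuse the first trace
# instead of triggering an expensive retrace (and the retracing warning).
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def scale(x):
    return x * 2.0

a = scale(tf.constant([1.0, 2.0]))       # first call traces
b = scale(tf.constant([1.0, 2.0, 3.0]))  # different shape, same trace
```

Without the signature, the second call would trace a second concrete function.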
GPU Memory
- TensorFlow grabs all GPU memory by default — set `memory_growth=True` before any ops
- `tf.config.experimental.set_memory_growth(gpu, True)` — must be called before GPU init
- OOM with large models — reduce batch size or use gradient checkpointing
- `CUDA_VISIBLE_DEVICES=""` to force CPU — for testing without a GPU
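A hedged sketch of the memory-growth setup: it must run before any op initializes the GPU, and the loop is harmless on a CPU-only machine (the device list is simply empty).

```python
import tensorflow as tf

# Enable memory growth for every visible GPU. This must run before any op
# touches the GPU; afterwards set_memory_growth raises RuntimeError.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    try:
        tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as err:
        # GPUs already initialized — restart the process and call this first.
        print(err)
```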
Data Pipeline
- `tf.data.Dataset` without `.prefetch()` — CPU/GPU idle time between batches
- `.cache()` after expensive ops — but before random augmentation
- `.batch()` before `.map()` for vectorized ops — faster than per-element
- `num_parallel_calls=tf.data.AUTOTUNE` — parallel preprocessing
- Dataset iteration in eager mode is slow — use inside tf.function or model.fit
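The ordering above can be sketched as one pipeline (a toy example on `range`, standing in for real decoding/augmentation): map in parallel, cache the expensive result, batch, then prefetch so input prep overlaps with training.

```python
import tensorflow as tf

# Toy pipeline illustrating the recommended ordering.
ds = (
    tf.data.Dataset.range(1000)
    .map(lambda x: tf.cast(x, tf.float32) / 255.0,
         num_parallel_calls=tf.data.AUTOTUNE)  # parallel preprocessing
    .cache()                                   # cache after the expensive map
    .batch(32)                                 # vectorize downstream work
    .prefetch(tf.data.AUTOTUNE)                # overlap prep with compute
)
first_batch = next(iter(ds))
```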
Shape Issues
- First dimension is batch — `None` for variable batch size in Input layer
- `model.build(input_shape)` if not using an Input layer — or the first call errors
- Reshape errors unclear — `tf.debugging.assert_shapes()` for debugging
- Broadcasting silently succeeds — may hide shape bugs
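A short sketch of `assert_shapes`: shared symbols (`"H"`, `"C"`) must agree across tensors, so a mismatch fails loudly with a readable message instead of being hidden by broadcasting.

```python
import tensorflow as tf

images = tf.zeros([8, 28, 28, 3])   # (batch, height, width, channels)
kernel = tf.zeros([28, 28, 3])

# Documents the expected shapes and raises immediately if they disagree.
tf.debugging.assert_shapes([
    (images, ("N", "H", "W", "C")),
    (kernel, ("H", "W", "C")),
])
```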
Gradient Tape
- Variables watched by default — tensors need `tape.watch(tensor)`
- `persistent=True` for multiple gradients — otherwise the tape is consumed after first use
- `tape.gradient` returns None if no path — check for a disconnected graph
- `@tf.custom_gradient` for custom backward — not all ops have gradients
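The first two points above in one minimal sketch: the variable is watched automatically, the constant needs `tape.watch`, and the second `tape.gradient` call is only legal because of `persistent=True`.

```python
import tensorflow as tf

v = tf.Variable(3.0)      # watched automatically
t = tf.constant(2.0)      # NOT watched unless you opt in

with tf.GradientTape(persistent=True) as tape:
    tape.watch(t)         # required for plain tensors
    y = v * v + t * t

dv = tape.gradient(y, v)  # dy/dv = 2v = 6.0
dt = tape.gradient(y, t)  # second call: only works with persistent=True
del tape                  # release tape resources when done
```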
Training Gotchas
- `model.trainable = False` after compile does nothing — set before compile
- BatchNorm behaves differently in training vs inference — `training=True/False` matters
- `model.fit` shuffles by default — `shuffle=False` for time series
- `validation_split` takes from the end — shuffle data first if order matters
Saving Models
- `model.save()` saves everything — architecture, weights, optimizer state
- `model.save_weights()` only weights — need model code to restore
- SavedModel format for serving — `tf.saved_model.save(model, path)`
- H5 format limited — doesn't save custom objects well; use SavedModel
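A round-trip sketch, assuming a recent TF 2.x where `model.save()` uses the native `.keras` format (`tf.saved_model.save(model, dir)` still produces a SavedModel for serving):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(2)])

# Save architecture + weights + optimizer state, then restore and compare.
path = os.path.join(tempfile.mkdtemp(), "model.keras")
model.save(path)
restored = tf.keras.models.load_model(path)

x = tf.ones([1, 4])
same = np.allclose(model(x), restored(x))
```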
Common Mistakes
- Mixing Keras and raw tf ops incorrectly — use `layers.Lambda` to wrap tf ops in Sequential
- `tf.print` vs Python print — Python print only runs at trace time in tf.function
- NumPy ops in graph — use tf ops; NumPy executes eagerly only
- Loss returns a scalar per sample — Keras averages; custom loops may need `tf.reduce_mean`
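The `tf.print` vs `print` point in a minimal sketch: the Python `print` fires only while the function is being traced, while `tf.print` is baked into the graph and fires on every call.

```python
import tensorflow as tf

@tf.function
def step(x):
    print("tracing")        # Python print: runs only during tracing
    tf.print("value:", x)   # tf.print: part of the graph, runs every call
    return x + 1

r1 = step(tf.constant(1))   # prints "tracing" and "value: 1"
r2 = step(tf.constant(2))   # same signature: only "value: 2"
```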