An immersive, travelling exhibition

In the heart of the countryside, a travelling exhibition reveals the stories of twelve people who speak about the reality behind appearances, through photo and video portraits. Photographer Johanna De Tessières reveals, beyond the suffering, unique and endearing personalities. The exhibition was designed from the start as a travelling installation and is meant to tour venues across Brussels and Wallonia.

Exhibition opening

Video portraits

Alongside the photographic portraits, the exhibition invites visitors to discover the video accounts of the witnesses. Their stories speak of loneliness and incomprehension, but also of courage and resilience.

Team

Project management
Sophie De Brabandere
Valérie de Halleux
Strategy
Antonella Lacatena
Copywriting
Valérie de Halleux
Graphic design and scenography
Anaëlle Golfier
Photography
Johanna de Tessières
Video
Sophie De Brabandere
Rodolphe De Brabandere

Team

Production, writing and direction
Sophie De Brabandere
Concept and writing
Céline Cocq
Sound recording, editing and mixing
Maria Conterno
Performance
Lucile Poulain
Musical composition and sound design
RIVE
Actors
Nicolas Oliver
Sébastien Schmit
Miriam Youssef
Lou and Gary
Illustration
Nicole Van Galen
Graphic design
Anaëlle Golfier

Team

Project management & direction
Valérie de Halleux
Camera
Mortimer Petre

Team

Project management and video
Antonella Lacatena
Camera
Mortimer Petre
Graphic & web design
Anaëlle Golfier
Web development
Jérôme Hubert

Team

Direction
Sophie De Brabandere
Camera
Mortimer Petre

Okay, quick confession: I get prickly when a wallet markets “multi-chain” but really just tacks on networks without the UX or safety plumbing to back it up. Seriously, it’s one thing to list 40 chains and another to let users safely move value across them. My instinct says users notice the gaps fast — failed swaps, unexpected approvals, phantom gas costs — and they leave, or worse, lose funds.

Here’s the thing. For experienced DeFi users who care about security, three features are not bells and whistles; they’re baseline: reliable transaction simulation, robust WalletConnect handling, and honest multi-chain support that respects both UX and threat models. Initially I thought “sure, all wallets do this,” but then I dug into what actually happens under load, in aggressive gas markets, and across L2 rollups. Let me rephrase: a lot of wallets claim the capability but cut corners on simulation fidelity, session security, or chain handling, and those corners are where trouble lives.

Transaction simulation deserves more attention. Simulation isn’t just estimating gas; it’s about replaying the exact call graph your dApp would produce, catching slippage, reverts, and subtle reentrancy or approval flows before you sign. A good simulator runs a local EVM fork or uses a tracing RPC to produce a deterministic result that mirrors mainnet conditions as closely as possible. On one hand this sounds heavy; on the other hand, skipping it means users sign blind and pay for it later. The best approach blends short, synchronous prechecks (fast and cheap) with optional deeper traces when risk is high.

Screenshot mockup of a wallet showing transaction simulation and WalletConnect session details

Practical patterns that actually reduce risk

Fast checks first: validate input parameters, nonce, estimated gas, and token balances locally. Medium checks: estimate slippage by simulating the swap path against an on-chain state snapshot. Longer checks: run a complete trace against a forked state (or a reliable trace RPC) to confirm no hidden reverts or state changes happen mid-transaction. These layered checks reduce false positives and keep latency manageable — because yes, users will abandon a flow that stalls for 15 seconds.
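As a rough sketch of this layered approach (the check functions here are hypothetical stubs; a real wallet would back them with balance lookups, `eth_call` prechecks, and a forked-state trace):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    ok: bool
    reason: str = ""

def run_layered_checks(tx: dict,
                       fast: Callable[[dict], CheckResult],
                       medium: Callable[[dict], CheckResult],
                       deep: Callable[[dict], CheckResult],
                       high_risk: bool) -> CheckResult:
    """Run cheap checks first and fail fast; pay for a deep trace only
    when the transaction is flagged high risk."""
    for check in (fast, medium):
        result = check(tx)
        if not result.ok:
            return result  # cheap failure, no expensive trace needed
    if high_risk:
        return deep(tx)    # full fork/trace simulation, highest fidelity
    return CheckResult(ok=True)
```

Failing on the cheap tiers keeps median latency low, which matters given how quickly users abandon a stalled flow.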

WalletConnect is wonderful and also a constant attack surface. Wow — the convenience of QR or deep linking is addictive. But the session model matters: session-scoped permissions should be minimal by default. Session requests should clearly list methods requested (not just generic “sign” wording). If an app asks for broad access, nudge the user to require only what’s needed. There’s a balance: power users want batch signatures and conveniences; security-conscious users want granular approvals. Wallet UX should support both, not pretend one-size-fits-all works.

Something felt off about many implementations: they keep the session alive forever unless the user manually revokes it. That’s a no. Time-limited sessions, device whitelisting, and one-click quick-revoke flows shrink the attack window. Displaying the dApp origin prominently, along with a clear summary of pending RPC methods, also cuts down on social-engineering tricks: visible intent means fewer accidental approvals.
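A minimal sketch of a time-limited session record (field names are illustrative, not the WalletConnect SDK’s actual API):

```python
import time
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class Session:
    dapp_origin: str          # displayed prominently in the UI
    methods: Tuple[str, ...]  # explicit list of approved RPC methods
    ttl_seconds: int = 3600   # sessions expire by default, never live forever
    created_at: float = field(default_factory=time.time)
    revoked: bool = False

    def is_active(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return not self.revoked and (now - self.created_at) < self.ttl_seconds

    def revoke(self) -> None:
        """One-click revoke: once flipped, no further signatures are allowed."""
        self.revoked = True
```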

Multi-chain support is more than swapping RPC endpoints. It’s about canonical identities for tokens and contracts, gas estimation differences, and UX that respects per-chain idiosyncrasies. For instance, L2s often have different sequencing guarantees and fee tokens. A wallet needs a per-chain adapter layer: chain metadata, gas model, explorer links, token representation, and simulation backends. On one hand it’s engineering overhead; on the other, the payoff is huge: consistent user expectations even when the underlying L1/L2 behavior diverges.

Here’s a practical checklist wallets should implement for multi-chain safety:

- A per-chain adapter layer covering chain metadata, gas model, explorer links, and token representation.
- Canonical token identity keyed by (chainId, contractAddress), never by symbol alone.
- A chain-aware simulation backend, so prechecks run against the chain the transaction actually targets.
- Clear per-chain labeling in the UI (e.g., “USDC (Polygon)”) so cross-chain tokens are never ambiguous.
- Re-simulation on every network switch before the user signs.

WalletConnect integration and multi-chain simulation are related. When a dApp requests a transaction on a chain different from the wallet’s active chain, the wallet should either reject with a clear error or prompt a one-click, atomic network switch that includes a pre-simulated result for that chain. If the wallet merely offers to switch networks without re-simulating on the target chain, you’ve introduced subtle failure modes — and users will feel betrayed when a trade fails or overpays.
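That switch-and-re-simulate rule can be sketched like this (`simulate_on` stands in for a chain-specific simulation backend; the names are illustrative):

```python
class ChainMismatchError(Exception):
    pass

def handle_request(active_chain: int, request_chain: int, tx: dict,
                   simulate_on, allow_switch: bool = False):
    """Reject cross-chain requests outright, or switch atomically only
    after the transaction simulates cleanly on the *target* chain."""
    if request_chain != active_chain and not allow_switch:
        raise ChainMismatchError(
            f"dApp targets chain {request_chain}, wallet is on {active_chain}")
    result = simulate_on(request_chain, tx)  # always simulate on the target chain
    if not result["ok"]:
        raise ChainMismatchError(f"simulation failed on chain {request_chain}")
    return request_chain, result             # new active chain + simulation result
```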

Okay, real talk: no solution is bulletproof. There are tradeoffs. Deep trace simulations are expensive and add latency. Light-weight checks can miss stateful attack vectors. Wallet UX that forces micro-decisions can overwhelm users. On one hand you need strict security defaults. On the other, forcing power users into friction is bad. The best design? Conservative defaults with expert modes that expose more control for advanced users.

If you’re evaluating wallets, watch for three signals: how they present transaction simulation results (is it actionable?), how they manage WalletConnect sessions (granular, revocable, time-limited?), and how they implement multi-chain metadata (is token identity consistent?). A wallet that nails these will save users from a large fraction of common losses — approvals gone wrong, failed swaps, and cross-chain mishaps.

Try it practically — what to test as an advanced user

Want to audit a wallet quickly? Try these steps: create a WalletConnect session with a familiar dApp and note the session permissions. Initiate a swap to a chain the wallet supports but is not currently selected and observe whether it re-simulates on the target chain. Create a token approval flow and see if the wallet shows exact spender addresses and allowance amounts, not vague “dApp wants access.” Finally, simulate a high-gas scenario and watch whether the wallet’s estimates align with on-chain outcomes — if there’s a pattern of huge undershoots, that’s a red flag.

If you want to try a wallet that focuses on developer- and security-minded UX, check this one out here. It’s not the only option, but the implementation choices they highlight — granular sessions, clear simulation outputs, and chain-aware behaviors — are worth studying.

FAQ

Q: How reliable are on-device simulations versus RPC trace services?

A: On-device (local) simulations are fast and private, but they can miss subtle differences present on the real network unless you fork state. RPC trace services are higher fidelity but depend on RPC provider quality and can introduce privacy concerns. Best practice: combine both—do a quick local precheck, and run an optional, deeper trace when the transaction is large or complex.

Q: Should WalletConnect always require explicit approval for every signature?

A: For safety, yes—by default. But experienced users often need batch approvals. Offer a tiered model: conservative default with an opt-in “power mode” that allows session-scoped batching, combined with time limits and quick-revoke.

Q: How do wallets handle token identity across chains?

A: The robust approach is mapping tokens by (chainId, contractAddress) and showing canonical names and logos from trusted metadata sources. Cross-chain tokens should be labeled clearly (e.g., “USDC (Polygon)”) and linked to on-chain metadata where possible. Ambiguity is a common source of user error—don’t let it happen.
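A toy version of that (chainId, contractAddress) mapping; the addresses below are the well-known USDC deployments, but treat the table as illustrative rather than a complete registry:

```python
# Canonical token identity keyed by (chainId, lowercased contract address).
TOKENS = {
    (1,   "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"): ("USDC", "Ethereum"),
    (137, "0x2791bca1f2de4661ed88a30c99a7a9449aa84174"): ("USDC", "Polygon"),
}

def token_label(chain_id: int, address: str) -> str:
    """Return an unambiguous per-chain label, or flag an unknown token."""
    entry = TOKENS.get((chain_id, address.lower()))
    if entry is None:
        return f"Unknown token on chain {chain_id}"
    symbol, chain_name = entry
    return f"{symbol} ({chain_name})"
```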

Whoa, this topic hits hard.
I’ve been around trading desks and home setups alike, and nothing surprised me more than how much software choice shapes outcomes.
At first I thought all platforms were roughly the same, though actually that was naive—there’s a big difference between slick charts and usable, reliable execution.
My instinct said something was off the first time a backtest looked perfect but failed live because the platform ignored slippage and real fills.
That mismatch is why I care about the details.

Seriously, the market doesn’t forgive sloppy assumptions.
Most traders obsess over edge and ignore execution quality.
On one hand you build a great strategy; on the other hand the platform eats your historic profits through hidden costs and dataset quirks.
Initially I thought that better indicators alone would make me profitable, but then realized cleaner tick data and realistic order simulation mattered more.
This is a frequent blind spot.

Okay, so check this out—real-world futures trading is noisy.
Latency matters more than people admit.
You can have a perfect model on end-of-day bars and still get blown up intraday because the platform’s API queues orders poorly under load.
When you backtest, think like a market maker who worries about partial fills and sweep orders, not like a spreadsheet jockey.
That mindset changes your approach immediately.

Hmm… I remember a morning in Chicago when my laptop froze.
I had a strategy running on a platform that looked amazing until it stalled during a volatile open.
The demo looked flawless, but live order routing was flaky and the platform’s simulated slippage was just a guess, not grounded in execution logs.
That experience taught me to prioritize platforms that let you replay real tick data and measure slippage against real fills, not just theoretical fills.
It was a hard lesson, but useful.

Here’s what bugs me about some vendor pitches.
They show glossy dashboards and call it “professional-grade.”
But professional-grade means you can reproduce trades, run walk-forward optimization, and validate risk metrics robustly with intraday tick-level backtests.
I’m biased, but the tools that focus on reproducibility save you from convincing yourself of a false edge.
Trust, but verify.

Whoa, this is where charting matters.
Good charting is readable and fast.
Great charting lets you script behavior and test rules against real market microstructure.
If you can’t automate the parts you test, you’ll always be intervening manually, and that introduces behavioral drift that skews live performance relative to the backtest.
Automation reduces human friction, which matters a lot.

Seriously, data quality will make you cry.
Historical data looks clean until you try to run a 1-tick scalping strategy on a thin contract.
Suddenly you find gaps, bad timestamps, and exchange split records that make your backtest hallucinate profits.
A mature platform gives you tick-level continuity, granular session templates, and tools to stitch sessions correctly across DST changes and contract rolls.
That kind of detail is non-negotiable for serious futures work.
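As a tiny illustration of why roll handling matters, here is a back-adjusted continuous-contract stitch under one assumed convention (each new contract’s first price overlaps the old contract’s last timestamp); real vendor roll logic is considerably more involved:

```python
def back_adjust(segments):
    """Build a back-adjusted continuous series from per-contract price lists
    (oldest contract first). At each roll, shift all earlier prices by the
    roll-date gap so the stitched series has no artificial jump."""
    if not segments:
        return []
    stitched = list(segments[0])
    for seg in segments[1:]:
        gap = seg[0] - stitched[-1]             # price gap at the roll
        stitched = [p + gap for p in stitched]  # shift history by the gap
        stitched.extend(seg[1:])                # drop the overlapping point
    return stitched
```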

I was hands-on with several platforms.
Some were built for retail speed, others for institutional resilience.
One allowed me to plug in custom execution models and simulate maker-taker fees, while another only supported simple slippage multipliers.
Initially I treated both as equivalent, but the differences showed up in subtle risk exposures after many small trades.
Small leaks add up quickly.

Here’s the thing.
Backtesting without realistic fill modeling is wishful thinking.
You need to implement order types, latency emulation, and realistic slippage curves that vary by time-of-day and liquidity.
When you include those, your edge often shrinks, which is good—now you’re closer to truth and less likely to be surprised.
That’s the healthier starting point for scaling strategies.
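A deliberately crude slippage curve in that spirit; the parameters (390-minute session, 30-minute open/close windows, depth scaling) are invented for illustration and would need calibration against real fill logs:

```python
def slippage_ticks(order_size: int, book_depth: int, minutes_from_open: int) -> float:
    """Expected slippage in ticks: worse when the order is large relative
    to displayed depth, and around the open/close of the session."""
    base = order_size / max(book_depth, 1)   # size vs. available liquidity
    thin_window = minutes_from_open < 30 or minutes_from_open > 360
    session_factor = 2.0 if thin_window else 1.0
    return round(base * session_factor, 2)
```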

Really, the platform ecosystem matters too.
Support, community scripts, and marketplace indicators are part of the value.
If you can access shared strategy libraries and vetted data feeds, you accelerate your research cycle.
I lean toward platforms that balance community contributions and strong vendor QA so you don’t inherit other people’s mistakes blindly.
Use other people’s work, but test it intensely.

Whoa, one more anecdote.
A trader I worked with optimized a strategy heavily on a single week of low volatility.
It performed admirably in backtest and they went live with confidence.
Then the market gapped during a macro event and the execution model couldn’t handle partial fills on stop orders, producing a chain reaction of losses.
We rebuilt the stop logic and added execution-side safeguards—hard trade-offs, but necessary.

Okay, here’s how I methodically evaluate a futures platform.
First, check the fidelity of historical tick data and whether the vendor documents data lineage.
Second, verify the execution simulation includes partial fills and slippage profiles, and whether you can replay market data at variable speeds.
Third, examine latency characteristics and the API for order throttling and error handling.
Finally, make sure your platform integrates with realistic brokerage routing so live behavior matches test assumptions.

Trader screen with backtesting and execution metrics displayed, showing slippage histograms and tick replay

Where to Start — Practical Steps and a Recommendation

Start with a simple checklist and iterate.
Run a walk-forward validation, not a single-optimization backtest.
Test across multiple market regimes and across contract rolls to capture structural changes.
Also, try platforms that let you inspect raw fills and match them to market data. One I’ve found useful for setting up robust backtests is NinjaTrader, because it supports tick replay, custom indicators, and varied execution models, though no platform is a silver bullet.
Use it as a tool, and make sure you stress-test your assumptions.
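The walk-forward idea above can be sketched as a rolling split of your bar indices: optimize on an in-sample window, validate on the next out-of-sample window, then roll forward.

```python
def walk_forward_splits(n_bars: int, train: int, test: int):
    """Yield (train_range, test_range) index pairs over n_bars of data,
    rolling the window forward by one test block each step."""
    splits = []
    start = 0
    while start + train + test <= n_bars:
        splits.append((range(start, start + train),
                       range(start + train, start + train + test)))
        start += test
    return splits
```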

My advice is practical and slightly opinionated.
Don’t chase shiny UI bells.
Chase fidelity, reproducibility, and the ability to automate edge validation.
On one hand you want features; on the other hand you need trust in the numbers when you scale live.
Balance those two and you’ll be better off.

Hmm, here’s a small checklist you can run tomorrow.
1) Pull tick data and replay the worst trading day of the year.
2) Simulate orders with variable latency.
3) Record partial fill rates and slippage histograms.
4) Do a quick walk-forward test.
5) Compare the walk-forward equity curve to your naive backtest.
Those steps expose many false edges fast.
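Steps 2 and 3 can be wired up with a stub harness like this; the fill and slippage models here are placeholders, so swap in your platform’s replayed fills before drawing conclusions:

```python
import random

def simulate_orders(n_orders: int, seed: int = 42):
    """Simulate orders under variable latency; record the partial-fill
    rate and a list of slippage samples for a histogram."""
    rng = random.Random(seed)
    partial_fills = 0
    slippage_samples = []
    for _ in range(n_orders):
        latency_ms = rng.uniform(1, 50)                  # variable latency
        filled_fraction = min(1.0, rng.uniform(0.5, 1.2))  # stub fill model
        if filled_fraction < 1.0:
            partial_fills += 1
        # Crude assumption for the stub: more latency, more slippage.
        slippage_samples.append(latency_ms * 0.01)
    return partial_fills / n_orders, slippage_samples
```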

I’ll be honest, backtesting is less glamorous than people hope.
It involves cleaning data, coding gritty execution logic, and suffering through ugly error traces.
But take the time now and your live months will be calmer and more predictable.
You’ll avoid surprises that cost capital and morale.
That’s the real payoff.

FAQs

How important is tick-level data for futures strategies?

Very important for short-term and scalping strategies.
If your holding period is minutes or less, minute bars wash out microstructure.
Tick-level replay lets you see true spread behavior and measure slippage realistically.
Longer-term strategies can often get by with aggregated bars, but confirm with spot checks against tick data.

Can backtesting ever match live trading exactly?

No, never exactly.
There will always be variance from fills, latency, and changing market participants.
But you can narrow the gap with realistic execution models, quality data, and conservative assumptions.
Aim to reduce surprises, not to eliminate them entirely.

What’s the quickest way to validate a new platform?

Run a few walk-forward tests and a stress replay of a volatile session.
Measure slippage, partial fill rates, and how orders are routed or rejected.
If the platform gives you reproducible logs to audit trades, that’s a huge plus.
If not, be cautious.