@penelopeysm penelopeysm commented Nov 20, 2025

Closes #1086

This adds a new type parameter to the model struct, which indicates whether threadsafe evaluation is required.

The default is false, so users must explicitly opt in. Making this opt-out instead would be easy (just change the default to true), but that would probably tank performance in single-threaded sessions.

If a user declares a model with `Threads.@threads` or bare `@threads` inside, the macro issues a warning. This detection is not foolproof, but it should catch 95% of cases.

I believe this is the best technical solution. #1128 provides a global on/off switch, but that is quite slow. It is also not technically correct: threadsafe evaluation is a property of a model, not of a Julia session.

Here is a demo (launch Julia with more than one thread, or you won't observe the incorrect behaviour):

julia> using DynamicPPL, Distributions

julia> @model function f(x)
           a ~ Normal()
           Threads.@threads for i in eachindex(x)
               x[i] ~ Normal(a)
           end
       end
┌ Warning: It looks like you are using `Threads.@threads` in your model definition.
│
│ Note that since version 0.39 of DynamicPPL, threadsafe evaluation of models is disabled by default. If you need it, you will need to explicitly enable it by creating the model, and then running `model = setthreadsafe(model, true)`.
│
│ Threadsafe model evaluation is only needed when parallelising tilde-statements (not arbitrary Julia code), and avoiding it can often lead to significant performance improvements.
│
│ Please see https://turinglang.org/docs/usage/threadsafe-evaluation/ for more details of when threadsafe evaluation is actually required.
└ @ DynamicPPL ~/ppl/dppl/src/compiler.jl:383
f (generic function with 2 methods)

julia> x = randn(1000); unsafe_model = f(x);

julia> correct_logjoint(a) = logpdf(Normal(), a) + sum(logpdf.(Normal(a), x))
correct_logjoint (generic function with 1 method)

julia> correct_logjoint(0.5)
-1531.2868766878232

julia> logjoint(unsafe_model, (; a = 0.5))
-747.7508803635657

julia> logjoint(unsafe_model, (; a = 0.5))
-805.0710717034472

julia> logjoint(unsafe_model, (; a = 0.5))
-576.5499112280947

The above gives wrong results because threadsafe evaluation wasn't requested. This PR lets you request it:

julia> safe_model = setthreadsafe(unsafe_model, true)
Model{typeof(f), (:x,), (), (), Tuple{Vector{Float64}}, Tuple{}, DefaultContext, true}(f, (x = [1.7022725865773345, 0.24976102119304136, 0.3819156217493525, -2.4421990126257653, 1.0112387777431466, 0.0016047673674860264, -0.2252466150401273, 1.0744166063416623, -0.629808413260951, 0.041422940392546084    -0.06903422786623391, 0.7349873184231311, 1.1795733883972002, -0.8759768619489093, 1.1966743339419046, 0.850590281127217, 0.45075481294292064, -0.9911445478594456, 0.3609019495825119, 1.2313172493323474],), NamedTuple(), DefaultContext())

julia> logjoint(safe_model, (; a = 0.5))
-1531.286876687823

julia> logjoint(safe_model, (; a = 0.5))
-1531.286876687823

julia> logjoint(safe_model, (; a = 0.5))
-1531.286876687823

@penelopeysm penelopeysm changed the base branch from main to breaking November 20, 2025 14:40
github-actions bot commented Nov 20, 2025

Benchmark Report

  • this PR's head: e839de88ab838d3db0e595793729a4abb1f97bde
  • base branch: a6d56a2b9074d9da27eea4a6e4a2ab9a3013913f

Computer Information

Julia Version 1.11.7
Commit f2b3dbda30a (2025-09-08 12:10 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 4 × Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, icelake-server)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Benchmark Results

┌───────────────────────┬───────┬─────────────┬───────────────────┬────────┬─────────────────────────────────┬────────────────────────────┬─────────────────────────────────┐
│                       │       │             │                   │        │        t(eval) / t(ref)         │     t(grad) / t(eval)      │        t(grad) / t(ref)         │
│                       │       │             │                   │        │ ──────────┬───────────┬──────── │ ───────┬─────────┬──────── │ ──────────┬───────────┬──────── │
│                 Model │   Dim │  AD Backend │           VarInfo │ Linked │      base │   this PR │ speedup │   base │ this PR │ speedup │      base │   this PR │ speedup │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼───────────┼───────────┼─────────┼────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│               Dynamic │    10 │    mooncake │             typed │   true │    400.15 │    372.71 │    1.07 │   9.71 │    8.98 │    1.08 │   3883.99 │   3345.54 │    1.16 │
│                   LDA │    12 │ reversediff │             typed │   true │   2647.33 │   2608.86 │    1.01 │   5.39 │    5.18 │    1.04 │  14259.47 │  13518.70 │    1.05 │
│   Loop univariate 10k │ 10000 │    mooncake │             typed │   true │ 108431.07 │ 111527.91 │    0.97 │   4.19 │    3.96 │    1.06 │ 453896.18 │ 441270.87 │    1.03 │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼───────────┼───────────┼─────────┼────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│    Loop univariate 1k │  1000 │    mooncake │             typed │   true │   9169.16 │   7945.89 │    1.15 │   4.28 │    4.67 │    0.92 │  39235.95 │  37141.43 │    1.06 │
│      Multivariate 10k │ 10000 │    mooncake │             typed │   true │  33886.61 │  31865.54 │    1.06 │   9.91 │   10.25 │    0.97 │ 335657.94 │ 326490.16 │    1.03 │
│       Multivariate 1k │  1000 │    mooncake │             typed │   true │   3681.23 │   3472.68 │    1.06 │   9.29 │    9.55 │    0.97 │  34187.52 │  33148.73 │    1.03 │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼───────────┼───────────┼─────────┼────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│ Simple assume observe │     1 │ forwarddiff │             typed │  false │      3.80 │      2.70 │    1.41 │   2.96 │    3.90 │    0.76 │     11.23 │     10.50 │    1.07 │
│           Smorgasbord │   201 │ forwarddiff │             typed │  false │   1221.81 │   1206.79 │    1.01 │  63.55 │  122.64 │    0.52 │  77649.38 │ 148000.70 │    0.52 │
│           Smorgasbord │   201 │ forwarddiff │       simple_dict │   true │       err │       err │     err │    err │     err │     err │       err │       err │     err │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼───────────┼───────────┼─────────┼────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│           Smorgasbord │   201 │ forwarddiff │ simple_namedtuple │   true │       err │       err │     err │    err │     err │     err │       err │       err │     err │
│           Smorgasbord │   201 │      enzyme │             typed │   true │   1681.55 │   1694.01 │    0.99 │   5.84 │    5.70 │    1.03 │   9826.93 │   9651.74 │    1.02 │
│           Smorgasbord │   201 │    mooncake │             typed │   true │   1673.56 │   1695.58 │    0.99 │   5.40 │    5.21 │    1.04 │   9034.67 │   8825.54 │    1.02 │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼───────────┼───────────┼─────────┼────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│           Smorgasbord │   201 │ reversediff │             typed │   true │   2010.82 │   1723.80 │    1.17 │  73.22 │   86.59 │    0.85 │ 147231.62 │ 149272.05 │    0.99 │
│           Smorgasbord │   201 │ forwarddiff │      typed_vector │   true │   1675.14 │   1679.82 │    1.00 │  57.47 │   55.00 │    1.05 │  96271.87 │  92382.62 │    1.04 │
│           Smorgasbord │   201 │ forwarddiff │           untyped │   true │   1663.29 │   1677.47 │    0.99 │ 120.71 │   55.83 │    2.16 │ 200780.30 │  93650.92 │    2.14 │
├───────────────────────┼───────┼─────────────┼───────────────────┼────────┼───────────┼───────────┼─────────┼────────┼─────────┼─────────┼───────────┼───────────┼─────────┤
│           Smorgasbord │   201 │ forwarddiff │    untyped_vector │   true │   1672.06 │   1661.01 │    1.01 │  58.75 │   54.68 │    1.07 │  98225.72 │  90829.94 │    1.08 │
│              Submodel │     1 │    mooncake │             typed │   true │      8.59 │      7.07 │    1.22 │   4.46 │    5.17 │    0.86 │     38.30 │     36.51 │    1.05 │
└───────────────────────┴───────┴─────────────┴───────────────────┴────────┴───────────┴───────────┴─────────┴────────┴─────────┴─────────┴───────────┴───────────┴─────────┘

github-actions bot commented
DynamicPPL.jl documentation for PR #1151 is available at:
https://TuringLang.github.io/DynamicPPL.jl/previews/PR1151/

codecov bot commented Nov 20, 2025

Codecov Report

❌ Patch coverage is 97.56098% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 79.99%. Comparing base (a6d56a2) to head (e839de8).
⚠️ Report is 1 commits behind head on breaking.

Files with missing lines   Patch %   Lines
src/model.jl               96.00%    1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##           breaking    #1151      +/-   ##
============================================
- Coverage     80.66%   79.99%   -0.67%     
============================================
  Files            41       41              
  Lines          3878     3874       -4     
============================================
- Hits           3128     3099      -29     
- Misses          750      775      +25     


@penelopeysm penelopeysm requested a review from mhauru November 20, 2025 18:46
@mhauru mhauru left a comment

Very happy with this. Only some minor style points to raise, and to wait for that docs page to be up.

src/fasteval.jl Outdated
Comment on lines 240 to 246
accs = map(
acc -> DynamicPPL.convert_eltype(float_type_with_fallback(eltype(params)), acc),
accs,
)
vi_wrapped = ThreadSafeVarInfo(OnlyAccsVarInfo(accs))
_, vi_wrapped = DynamicPPL._evaluate!!(model, vi_wrapped)
vi = OnlyAccsVarInfo(DynamicPPL.getaccs(vi_wrapped))
Member

Would it be simpler to have a single method and wrap this part in an `if is_threaded(f.model)`? It should get constant-propagated away at compile time. Very optional to change; the current version isn't bad either.

This is probably purely a style question, but there could be a difference: listing all the type parameters of Model in the function signature, I think, forces specialisation on all of them.
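The constant propagation being discussed can be sketched in isolation (toy names below, not DynamicPPL's actual types): a `Bool` type parameter is a compile-time constant, so a branch on it folds away and a single method serves both cases with no runtime cost.

```julia
# Toy stand-in for a model type carrying a `Threaded` flag in the type.
struct ToyModel{Threaded}
    name::String
end

# Reading the flag is a compile-time constant for a concrete ToyModel.
is_threaded(::ToyModel{T}) where {T} = T

# One method with the branch inside; for ToyModel{false} the compiler
# resolves `is_threaded(m)` to `false` and deletes the threadsafe arm.
function evaluate(m::ToyModel)
    if is_threaded(m)
        return :threadsafe_path
    else
        return :plain_path
    end
end

@assert evaluate(ToyModel{false}("f")) == :plain_path
@assert evaluate(ToyModel{true}("f")) == :threadsafe_path
```

Inspecting `@code_typed evaluate(ToyModel{false}("f"))` shows the branch compiled away, which is the same kind of check done with `@code_typed` later in this conversation.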

Member Author
@penelopeysm penelopeysm Nov 21, 2025

Yes, good idea, I was a bit concerned about the specialisation too. I thought about making threaded the first type parameter, which would also avoid this (and we rarely dispatch on any of the other type parameters of Model)... but decided against it because it might be too breaking

src/model.jl Outdated
args::NamedTuple{argnames,Targs},
defaults::NamedTuple{kwargnames,Tkwargs},
context::AbstractContext=DefaultContext(),
threadsafe::Bool=false,
Member

Is there a particular meaning to sometimes calling this threadsafe and sometimes Threaded?

Member Author

Threaded is a type parameter and threadsafe is a function argument 😄

A bit like Ctx and context.

Open to different names though!

src/compiler.jl Outdated
# If it's a macro, we expand it
if Meta.isexpr(expr, :macrocall)
return generate_mainbody!(mod, found, macroexpand(mod, expr; recursive=true), warn)
if expr.args[1] == Expr(:., :Threads, QuoteNode(Symbol("@threads"))) &&
Member

I wonder how often people do

using Threads: @threads
@threads

vs how often people call some other macro called @threads. I.e. false positives vs false negatives.

Member Author

Another case where I wasn't too sure where to draw the line. I think Threads.@threads probably accounts for most usage, but I'm not averse to also handling @threads since after all the warning message is quite noncommittal.
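To make the detection trade-off concrete, here is how the two spellings parse (a standalone sketch; the check in src/compiler.jl matches the qualified form, and after this thread also the bare form):

```julia
# Qualified form: the macrocall's first argument is the dotted
# expression Expr(:., :Threads, QuoteNode(Symbol("@threads"))).
ex = Meta.parse("Threads.@threads for i in 1:10 end")
@assert Meta.isexpr(ex, :macrocall)
@assert ex.args[1] == Expr(:., :Threads, QuoteNode(Symbol("@threads")))

# Bare form (e.g. after `using Base.Threads: @threads`): the first
# argument is just the symbol, so matching it also catches unrelated
# macros that happen to be named @threads — the false-positive side
# of the trade-off discussed above.
bare = Meta.parse("@threads for i in 1:10 end")
@assert Meta.isexpr(bare, :macrocall)
@assert bare.args[1] == Symbol("@threads")
```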

Member

I would err on the safe side in warning about @threads, but happy if you prefer otherwise.

Member Author

Agreed, changed now

@penelopeysm penelopeysm force-pushed the py/opt-in-tsvi branch 4 times, most recently from 0333445 to 27b3c23, November 26, 2025 12:49
@penelopeysm
Copy link
Member Author

penelopeysm commented Nov 27, 2025

Current benchmarks locally, all run without threadsafe eval:

unlinked @model f() = x ~ Normal()
                This PR                               breaking
eval      ----  8.754 ns                              10.417 ns
grad (FD) ----  51.638 ns (3 allocs: 96 bytes)        55.319 ns (3 allocs: 96 bytes)
grad (RD) ----  3.069 μs (44 allocs: 1.500 KiB)       3.051 μs (44 allocs: 1.500 KiB)
grad (MC) ----  127.976 ns (2 allocs: 64 bytes)       136.688 ns (2 allocs: 64 bytes)
grad (EN) ----  59.902 ns (2 allocs: 64 bytes)        101.934 ns (2 allocs: 64 bytes)

unlinked DynamicPPL.TestUtils.DEMO_MODELS[3]
                This PR                               breaking
eval      ----  234.252 ns (8 allocs: 352 bytes)      235.276 ns (8 allocs: 352 bytes)
grad (FD) ----  425.373 ns (13 allocs: 752 bytes)     483.333 ns (13 allocs: 752 bytes)
grad (RD) ----  14.792 μs (228 allocs: 8.438 KiB)     15.375 μs (230 allocs: 8.578 KiB)
grad (MC) ----  1.321 μs (18 allocs: 800 bytes)       1.363 μs (18 allocs: 800 bytes)
grad (EN) ----  899.719 ns (22 allocs: 1008 bytes)    1.231 μs (25 allocs: 1.141 KiB)

linked @model f() = x ~ Normal()
                This PR                               breaking
eval      ----  11.820 ns (1 allocs: 32 bytes)        14.505 ns (1 allocs: 32 bytes)
grad (FD) ----  64.845 ns (4 allocs: 144 bytes)       66.176 ns (4 allocs: 144 bytes)
grad (RD) ----  3.162 μs (51 allocs: 1.750 KiB)       3.097 μs (52 allocs: 1.781 KiB)
grad (MC) ----  199.830 ns (4 allocs: 128 bytes)      198.053 ns (4 allocs: 128 bytes)
grad (EN) ----  96.390 ns (5 allocs: 160 bytes)       174.799 ns (6 allocs: 208 bytes)

linked DynamicPPL.TestUtils.DEMO_MODELS[3]
                This PR                               breaking
eval      ----  295.918 ns (12 allocs: 528 bytes)     299.479 ns (12 allocs: 528 bytes)
grad (FD) ----  558.962 ns (17 allocs: 1.094 KiB)     575.521 ns (17 allocs: 1.094 KiB)
grad (RD) ----  16.834 μs (251 allocs: 9.188 KiB)     16.917 μs (253 allocs: 9.297 KiB)
grad (MC) ----  1.691 μs (26 allocs: 1.125 KiB)       1.654 μs (26 allocs: 1.125 KiB)
grad (EN) ----  1.052 μs (28 allocs: 1.203 KiB)       1.402 μs (34 allocs: 1.469 KiB)

I don't really understand why the last one is slower (and it's reproducible). Will try to profile.

Edit: I was wrong; it's not reproducible. I benchmarked again and the last one was 295 ns, which seems about right (cutting out the `Threads.nthreads() > 1` check seems to shave a few nanoseconds off). The table above has been updated.

Comment on lines 892 to 924
@inline function init!!(
function init!!(
Member Author

re. #1139 (review):

> Will the fragility of performance get better once TSVI is opt-in, and the num_threads > 1 check is replaced with something that can be evaluated at compile time? Also, even if it doesn't, I presume performance won't be worse than it was with the old LDF, just maybe roughly the same?

Turns out that yes, that helped :) So we can get rid of the `@inline`, AND also shift the `convert_eltype` thing inside the threadsafe bit. I checked with `@code_typed` and that stuff definitely gets compiled away if it's not needed.

# seems to provide an upper bound to maxthreadid(), so we use that here.
# See https://github.com/TuringLang/DynamicPPL.jl/pull/936
accs_by_thread = [map(split, getaccs(vi)) for _ in 1:(Threads.nthreads() * 2)]
accs_by_thread = [map(split, getaccs(vi)) for _ in 1:Threads.maxthreadid()]
Member Author

Also very pleased to clean this up.
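The buffer-per-thread pattern this cleans up can be sketched with a toy accumulator (not the real `getaccs`/`split` machinery): `Threads.maxthreadid()` is an upper bound on any `threadid()` observed during the session, so it safely sizes the buffer without the old `nthreads() * 2` heuristic.

```julia
# One accumulator slot per possible thread id. maxthreadid() bounds
# Threads.threadid() for the whole session, so indexing cannot go out
# of bounds even if the runtime's thread pool grows.
accs = zeros(Threads.maxthreadid())

Threads.@threads for i in 1:1000
    # Each iteration adds into the slot for the thread it runs on; the
    # read-modify-write contains no yield point, so the id is stable
    # across it and no two iterations race on the same slot.
    accs[Threads.threadid()] += log(i)
end

# Merging the per-thread slots recovers the sequential result.
total = sum(accs)
@assert total ≈ sum(log, 1:1000)
```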

Comment on lines -427 to +430
# Force single-threaded execution.
_, varinfo = DynamicPPL.evaluate_threadunsafe!!(model, varinfo)
# TODO(penelopeysm): Implement merge, etc. for DebugAccumulator, and then perform a
# check on the merged accumulator, rather than checking it in the accumulate_assume
# calls. That way we can also correctly support multi-threaded evaluation.
_, varinfo = DynamicPPL.evaluate!!(model, varinfo)
Member Author

This is the last thing that I'm a bit displeased about, but it's still better than on main (I wrote up an issue here #1157), so I thinkkkkk we can leave the proper fix to another PR.

@penelopeysm penelopeysm requested a review from mhauru November 27, 2025 14:19
@mhauru mhauru left a comment

Just a couple of trivialities.

Interesting how much Enzyme benchmarks care about this PR.

src/compiler.jl Outdated
expr.args[1] == Expr(:., :Threads, QuoteNode(Symbol("@threads"))) &&
!warn_threads
)
warn_threads = true
Member

Based on the name, I would have guessed this would work the other way around with true/false (I read "warn" as imperative).

Member Author

Ah, yes, let's flip it.

Comment on lines +51 to +53
When threadsafe evaluation is enabled for a model, an internal flag is set on the model.
The value of this flag can be queried using `DynamicPPL.requires_threadsafe(model)`, which returns a boolean.
This function is newly exported in this version of DynamicPPL.
Member Author

I realised that Turing needs this function (basically PG/SMC should error any time they encounter a model that needs threadsafe eval -- TuringLang/Turing.jl#2658), so we need to export it. Other changes just follow on from review!

@mhauru mhauru left a comment

Thanks!

@penelopeysm penelopeysm merged commit c27f5e0 into breaking Dec 1, 2025
20 of 21 checks passed
@penelopeysm penelopeysm deleted the py/opt-in-tsvi branch December 1, 2025 13:22
github-merge-queue bot pushed a commit that referenced this pull request Dec 2, 2025
* v0.39

* Update DPPL compats for benchmarks and docs

* remove merge conflict markers

* Remove `NodeTrait` (#1133)

* Remove NodeTrait

* Changelog

* Fix exports

* docs

* fix a bug

* Fix doctests

* Fix test

* tweak changelog

* FastLDF / InitContext unified (#1132)

* Fast Log Density Function

* Make it work with AD

* Optimise performance for identity VarNames

* Mark `get_range_and_linked` as having zero derivative

* Update comment

* make AD testing / benchmarking use FastLDF

* Fix tests

* Optimise away `make_evaluate_args_and_kwargs`

* const func annotation

* Disable benchmarks on non-typed-Metadata-VarInfo

* Fix `_evaluate!!` correctly to handle submodels

* Actually fix submodel evaluate

* Document thoroughly and organise code

* Support more VarInfos, make it thread-safe (?)

* fix bug in parsing ranges from metadata/VNV

* Fix get_param_eltype for TSVI

* Disable Enzyme benchmark

* Don't override _evaluate!!, that breaks ForwardDiff (sometimes)

* Move FastLDF to experimental for now

* Fix imports, add tests, etc

* More test fixes

* Fix imports / tests

* Remove AbstractFastEvalContext

* Changelog and patch bump

* Add correctness tests, fix imports

* Concretise parameter vector in tests

* Add zero-allocation tests

* Add Chairmarks as test dep

* Disable allocations tests on multi-threaded

* Fast InitContext (#1125)

* Make InitContext work with OnlyAccsVarInfo

* Do not convert NamedTuple to Dict

* remove logging

* Enable InitFromPrior and InitFromUniform too

* Fix `infer_nested_eltype` invocation

* Refactor FastLDF to use InitContext

* note init breaking change

* fix logjac sign

* workaround Mooncake segfault

* fix changelog too

* Fix get_param_eltype for context stacks

* Add a test for threaded observe

* Export init

* Remove dead code

* fix transforms for pathological distributions

* Tidy up loads of things

* fix typed_identity spelling

* fix definition order

* Improve docstrings

* Remove stray comment

* export get_param_eltype (unfortunatley)

* Add more comment

* Update comment

* Remove inlines, fix OAVI docstring

* Improve docstrings

* Simplify InitFromParams constructor

* Replace map(identity, x[:]) with [i for i in x[:]]

* Simplify implementation for InitContext/OAVI

* Add another model to allocation tests

Co-authored-by: Markus Hauru <[email protected]>

* Revert removal of dist argument (oops)

* Format

* Update some outdated bits of FastLDF docstring

* remove underscores

---------

Co-authored-by: Markus Hauru <[email protected]>

* implement `LogDensityProblems.dimension`

* forgot about capabilities...

* use interpolation in run_ad

* Improvements to benchmark outputs (#1146)

* print output

* fix

* reenable

* add more lines to guide the eye

* reorder table

* print tgrad / trel as well

* forgot this type

* Allow generation of `ParamsWithStats` from `FastLDF` plus parameters, and also `bundle_samples` (#1129)

* Implement `ParamsWithStats` for `FastLDF`

* Add comments

* Implement `bundle_samples` for ParamsWithStats -> MCMCChains

* Remove redundant comment

* don't need Statistics?

* Make FastLDF the default (#1139)

* Make FastLDF the default

* Add miscellaneous LogDensityProblems tests

* Use `init!!` instead of `fast_evaluate!!`

* Rename files, rebalance tests

* Implement `predict`, `returned`, `logjoint`, ... with `OnlyAccsVarInfo` (#1130)

* Use OnlyAccsVarInfo for many re-evaluation functions

* drop `fast_` prefix

* Add a changelog

* Improve FastLDF type stability when all parameters are linked or unlinked (#1141)

* Improve type stability when all parameters are linked or unlinked

* fix a merge conflict

* fix enzyme gc crash (locally at least)

* Fixes from review

* Make threadsafe evaluation opt-in (#1151)

* Make threadsafe evaluation opt-in

* Reduce number of type parameters in methods

* Make `warned_warn_about_threads_threads_threads_threads` shorter

* Improve `setthreadsafe` docstring

* warn on bare `@threads` as well

* fix merge

* Fix performance issues

* Use maxthreadid() in TSVI

* Move convert_eltype code to threadsafe eval function

* Point to new Turing docs page

* Add a test for setthreadsafe

* Tidy up check_model

* Apply suggestions from code review

Fix outdated docstrings

Co-authored-by: Markus Hauru <[email protected]>

* Improve warning message

* Export `requires_threadsafe`

* Add an actual docstring for `requires_threadsafe`

---------

Co-authored-by: Markus Hauru <[email protected]>

* Standardise `:lp` -> `:logjoint` (#1161)

* Standardise `:lp` -> `:logjoint`

* changelog

* fix a test

---------

Co-authored-by: Markus Hauru <[email protected]>
Co-authored-by: Markus Hauru <[email protected]>
