Description
I recently discovered this library and learned a lot from reading through it. I had an idea for an enhancement that could make the syntax of einsum more generic and closer to Python's succinct syntax without compromising performance.
// First, create a new UDL. Note that this requires a static_string class usable as a
// class-type non-type template parameter (C++20 and above), plus CTAD to deduce its length.
// You'd need to add a feature macro to enable/disable this feature.
template<static_string expr>
[[nodiscard]] constexpr auto operator""_idx() noexcept -> make_indices_t<expr>
{
    return {};
}
// Next, create a new einsum overload that accepts the parsed index expression
template<typename IdxExpr, typename ... Ts>
[[nodiscard]] FASTOR_INLINE constexpr auto einsum(IdxExpr, Ts&& ... ts) -> decltype(auto) // I am just being lazy for the example
{
    return /* dispatch to the existing einsum<Index<...>, ...>(std::forward<Ts>(ts)...) overloads */;
}
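For completeness, here is a rough sketch of the static_string helper I have in mind (hypothetical, not currently in Fastor; just enough to be usable as a class-type NTTP):

#include <cstddef>

// Hypothetical helper, not part of Fastor: a structural type usable as a
// C++20 class-type non-type template parameter. CTAD deduces N from the
// string literal, so the literal's contents become part of the type.
template<std::size_t N>
struct static_string
{
    char data[N] {};

    constexpr static_string(const char (&str)[N]) noexcept
    {
        for (std::size_t i = 0; i < N; ++i) data[i] = str[i];
    }

    [[nodiscard]] constexpr std::size_t size() const noexcept { return N - 1; } // drop the trailing '\0'
    [[nodiscard]] constexpr char operator[](std::size_t i) const noexcept { return data[i]; }
};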
So, what does this buy us? Well, of course we could now write calls like this:
auto c = einsum("ij,jk->ik"_idx, a, b);
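For comparison, the same contraction written with Fastor's existing enum/Index based syntax is roughly:

using namespace Fastor;
enum {I, J, K};
auto c = einsum<Index<I, J>, Index<J, K>>(a, b);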
But that on its own doesn't really add anything new to the library. What would be more interesting is to use this opportunity to pull in some of the broadcasting features that NumPy's einsum supports, simply by doing some string parsing at compile time (a rough sketch of that parsing follows the examples below):
auto c = einsum("i...i"_idx, a);
auto d = einsum("ij...,jk...->ik..."_idx, a, b);
auto e = einsum("...ii->...i"_idx, a);
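Here is a very rough sketch of what that compile-time parsing could look like for a single index group (again hypothetical: Index is Fastor's existing template, make_indices_impl/make_indices_t are made-up names, and splitting on ',' / "->" and expanding "..." is left out):

#include <cstddef>
#include <utility>

// Hypothetical: map each character of one index group to a size_t and pack
// the result into Fastor's existing Index<...> type, entirely at compile time.
template<static_string expr, std::size_t... Is>
constexpr auto make_indices_impl(std::index_sequence<Is...>)
    -> Fastor::Index<static_cast<std::size_t>(expr[Is] - 'a')...>;

template<static_string expr>
using make_indices_t =
    decltype(make_indices_impl<expr>(std::make_index_sequence<expr.size()>{}));

// e.g. make_indices_t<"ij"> is Fastor::Index<8, 9>. The full version would
// produce a list of such Index groups plus the output indices, and the new
// einsum overload would forward to the existing einsum<Index<...>, ...>() calls.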
I think having a syntax like this enables a much more generic API. It does cap the number of distinct indices at the size of the character set (whereas the enum-based approach allows for far larger index counts), but I'm guessing other limitations would be hit well before that one.