Integral comparisons and common_type are problematic #658
Hi @almania, thanks for the great feedback 👍🏻 I am not sure the framework should change the representation types by itself. I would prefer a compile-time error forcing the user to change the representation type before comparison or addition. Your ideas seem related to #303. @chiphogg has already implemented such things in Au. Could you please check whether that would be enough for your case, or do we need to think about a more complex solution?
Thanks @mpusz, love the project obv. I'm happy with compile-time errors, but my preference would be to still allow comparisons - they can be made completely safe/accurate by checking limits first, and are such a nicety to have, albeit at some library-code complexity. But that perhaps plays into another thought I just had: if ambiguous/risky operations were disabled by requires clauses, would they still be overridable by importing a namespace providing alternative implementations? I feel that may simply be the case (if not, it may require moving the member methods into ADL space), but it would allow nice extensibility, along these lines:
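Something like this toy sketch, perhaps (all names are hypothetical, nothing here is actual mp-units API):

```cpp
#include <cstdint>

// Toy model: Scale counts how many base units one tick of the quantity is.
template<int Scale, typename Rep>
struct quantity { Rep value; };

// Core library: comparison is only defined when no rescaling is needed, so
// nothing risky compiles by default; mixed-scale comparisons are rejected.
template<int S, typename R1, typename R2>
constexpr bool operator==(quantity<S, R1> a, quantity<S, R2> b)
{
  return a.value == b.value;
}

// Opt-in "operation mode": mixed-scale comparisons, widened to 64 bits so
// the rescale itself cannot overflow narrow reps.
namespace relaxed_ops {
  template<int S1, typename R1, int S2, typename R2>
  constexpr bool operator==(quantity<S1, R1> a, quantity<S2, R2> b)
  {
    return static_cast<std::int64_t>(a.value) * S1 ==
           static_cast<std::int64_t>(b.value) * S2;
  }
}
```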
Would be the best of all worlds, really - users in these small embedded worlds can just import whatever operation modes they like, even those not supported directly by the library. I do agree about the complexity of changing reps, particularly when people are combining fixed-point or accumulator types with the library - it would likely require a lot of customization points to be practical. Which would be impractical.
What I mean is that a user could type `read_adc().in<long>() == 5 * V;` to make it compile.
I am not sure how you would like to make it work. Unfortunately, there is no global state you could mutate to change the behavior of the engine. Things like this are possible, though:

```cpp
using my_rep = SafeInt<int16_t>;
quantity<ADC_lsb, my_rep> q = my_rep{5} * V;
```
I am not afraid of the complexity here. It is not that big of a problem. The problem is that a user's type would be silently promoted to a bigger type without the user's knowledge. If such a result of adding two quantities were then assigned back to the user's variable (e.g., one using the original, narrower rep), the promotion would be silently lost again. Does it make sense?
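For illustration (plain integers rather than library types), the failure mode being described is roughly:

```cpp
#include <cstdint>

std::int16_t a = 30'000, b = 30'000;

// If the library silently widened on +, the sum itself would be fine...
std::int32_t sum = std::int32_t{a} + b;              // 60'000, no overflow

// ...but assigning it back to the user's original-width storage silently
// undoes the promotion, truncating the value the widening tried to protect.
std::int16_t back = static_cast<std::int16_t>(sum);  // -5'536
```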
@chiphogg, the primary author of Au, contributes to this project and is a co-author of the ISO proposal. We work with other experts to provide the best possible library for C++ standardization. If you see any Au features that are important for your domain but are not part of this project or not proposed in the ISO C++ paper, please let us know so we can learn about your use cases and improve the proposal.
Thanks for the callouts @mpusz! The overflow problem in C++ units libraries is near and dear to my heart. I think this Overflow doc is the most accessible survey of the problem I've seen. Note that the comparison problem that you raised, @almania, is the central example in the article --- well spotted! Here's a summary of my current thoughts on the matter.
So, basically: the ideal is to give the library an adaptive policy for which conversions are OK by default, and to provide runtime-checked conversions for the cases where users are prepared to handle an error result.
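As a sketch of what such a runtime-checked conversion could look like (a hypothetical helper, not Au's or mp-units' actual spelling, and assuming the source rep is narrower than 64 bits):

```cpp
#include <cstdint>
#include <limits>
#include <optional>

// Rescale `value` by an integral conversion factor, reporting failure
// instead of overflowing. The multiply happens in 64 bits, which is exact
// whenever the source representation is narrower than 64 bits.
template<typename To, typename From>
constexpr std::optional<To> checked_scale(From value, std::int64_t factor)
{
    const std::int64_t wide = static_cast<std::int64_t>(value) * factor;
    if (wide < std::numeric_limits<To>::min() ||
        wide > std::numeric_limits<To>::max())
        return std::nullopt;  // the caller decides how to handle the error
    return static_cast<To>(wide);
}

// e.g. checked_scale<std::int16_t>(std::int16_t{271}, 121) == std::nullopt
```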
I'm certain to have missed some of the complexity of it, but comparisons could be made 100% accurate without changing the rep, could they not? At least for the same-origin case:
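For example, a sketch of that "check limits first" approach for built-in integers, written as a hypothetical helper where 1 coarse unit == `factor` fine units and `factor >= 1`:

```cpp
#include <compare>
#include <concepts>
#include <limits>

// Compare a count of fine-grained units against a count of coarse units
// (1 coarse == factor fine, factor >= 1) without ever leaving Rep's range.
template<std::integral Rep>
constexpr std::strong_ordering
compare_fine_vs_coarse(Rep fine, Rep coarse, Rep factor)
{
    // If scaling `coarse` would escape Rep's range, the ordering is already
    // decided: every representable `fine` lies on the same side of it.
    if (coarse > std::numeric_limits<Rep>::max() / factor)
        return std::strong_ordering::less;
    if (coarse < std::numeric_limits<Rep>::min() / factor)
        return std::strong_ordering::greater;
    return fine <=> static_cast<Rep>(coarse * factor);  // provably in range
}
```

When one operand is a compile-time constant, the two range checks fold away entirely.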
The difficulty in getting a generic implementation right does make it a good candidate for being a library feature, I think, although whether it ought be the default behaviour or not is certainly a fair question.
I believe what I meant may, to an extent, 'just work':
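i.e., continuing the toy model sketched above, the "mode switch" would just be a using-directive:

```cpp
using namespace relaxed_ops;  // opt in to mixed-scale comparisons

constexpr quantity<1, std::int16_t>   fine{121};  // 121 ticks of 1 base unit
constexpr quantity<121, std::int16_t> coarse{1};  // 1 tick of 121 base units
static_assert(fine == coarse);  // 121*1 == 1*121, evaluated in 64 bits
```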
I'm unsure of the utility/sense of it, but it's maybe food for thought as to what solutions/workarounds are available.
Thank you for making me aware of this; I will have to try it out.
Security is a hot topic in the programming industry right now, and the C++ committee is catching up.

For equality, the first thing you'd do is check whether the finer-grained value is a multiple of the coarser-grained one, with a modulo.
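For instance, sketched as a hypothetical helper (same convention as before: 1 coarse unit == `factor` fine units):

```cpp
#include <concepts>

// Equality between a fine-grained count and a coarse-grained count without
// scaling anything up: divide instead of multiply, so nothing can overflow.
template<std::integral Rep>
constexpr bool equal_fine_vs_coarse(Rep fine, Rep coarse, Rep factor)
{
    // A value that is not a whole multiple of the conversion factor cannot
    // equal any coarse-grained value at all.
    if (fine % factor != 0) return false;
    return fine / factor == coarse;
}

static_assert(equal_fine_vs_coarse(32'670, 270, 121));   // 32670 == 270 * 121
static_assert(!equal_fine_vs_coarse(32'671, 270, 121));
```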
I think we would still want to use common units, regardless.

I never thought of incorporating the runtime conversion checkers into the operator definitions. It's an interesting idea!
I think the library currently does not allow lossy `common_type`s (hence the integral scale factors), so comparison could be defined as if "performed on a common unit of infinite effective width". This is consistent with lossy conversions requiring explicit casts: a comparison should not be introducing one.
I think for anything gracefully handling overflow, be it safe conversion functions or arithmetic/comparisons, at least that or other customization points would be strictly required for it to even be implementable, really.
Thinking on it more, I think a partially specializable functor (pseudo):
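Perhaps along these lines; every name below is hypothetical rather than mp-units API:

```cpp
#include <concepts>
#include <cstdint>
#include <ratio>

// The library would route every rep-to-rep rescale through this trait
// instead of doing bare arithmetic on std::common_type results.
template<typename To, typename From, typename Scale>
struct scaling_traits;  // primary template intentionally left undefined

// Built-in integers: widen to 64 bits so applying the scale factor cannot
// overflow the source representation mid-conversion.
template<std::integral To, std::integral From, std::intmax_t N, std::intmax_t D>
struct scaling_traits<To, From, std::ratio<N, D>> {
    static constexpr To convert(From value)
    {
        return static_cast<To>(static_cast<std::int64_t>(value) * N / D);
    }
};

// A fixed-point or SafeInt user would partially specialize scaling_traits
// for their own rep, taking exact control of rounding and overflow.
```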
Something of that shape would be about required to implement accurate type-to-type conversions and infinite-width comparisons for user-defined types (fixed-point etc.), along with utilities for different cast modes, though even that prototype wouldn't cover everything.

This is maybe a whole kettle of fish/scope creep that is better avoided. OTOH, I do believe integral type-to-type conversions need to be made accurate if integral types are to be supported at all, really - and achieving that (e.g. #615) would provide most if not all of the framework required for infinite-width comparisons as well. So if it's there... should it not be leveraged, at least for the built-in types?

Edit: apologies, misclicked close.
Thanks for your input. I will have to think about it when I have more free time. Right now, I am working hard on quantity specification conversions and vector and complex quantities. If someone would like to do some prototyping work on this, please feel free to jump in. |
The current specification of `<=>` casts both operands to `std::common_type_t`, which means that for integral use the user needs to be very aware of potential overflow on any comparison between units of different magnitudes.

This is problematic because, as the `hw_example` points out, the library is otherwise well-suited to providing an interface to values representing all kinds of ranges. For instance, a fairly simple "16-bit signed ADC, 121 V max" unit definition introduces an implicit 121x multiply of the `int16_t` in a comparison against volts (see the sketch below). Which means that a value as low as 1 V on the ADC (which can read up to 121 V) will trigger a signed integer overflow in an innocuous line that merely checks whether the ADC is above 5 V.
And the problem only gets worse the more accurately you define the units; e.g., by dialling in the exact voltage divider, you may end up with much larger magnitude adjustments required to get to the common type with the base unit, depending on how 'irrational' the factors are. With such more accurate scaling factors, our voltage comparison will overflow when the ADC is reading a numerical value of just +/-27 out of +/-32767, making 0.1 V unsafe to compare against 5 V (or 0 V, for that matter). I believe this will be both dangerous and rather surprising for users in its current state.
My thoughts:

- Operations that scale an integral representation should likely promote it to a "big enough" type (e.g., `long`).
- `std::common_type` should likely similarly promote integer representations to the next bit width if scaling is involved, at least up to that "big enough" bit width. If not this, it likely either shouldn't be defined, or its use of automatically generated scaling factors should be very carefully considered, imo. This differs from usual C++, but for the much better. Yes, it does mean that adding two quantities of different units may produce a different rep than adding two quantities of the same unit, but it's better to make the user aware of the risks of what's going on behind the scenes than to fail silently, imo.
- Even if `std::common_type` and/or implicit conversions are prohibited altogether (rather than increasing rep widths), comparisons could still be made completely accurate and safe by checking whether either quantity is outside the representable range of the other, and returning the appropriate ordering from that. Imo, this solution should be adopted even for "big enough" types, as it's strictly more accurate, at a slight runtime cost. Where values are known at compile time, such as the `5 * V` above, the compiler ought have an easy time producing optimal code at least, and the utility of it a dream.

A question also: should conversions where `std::numeric_limits<rep>::max()` produces the same value as `::min()` be prohibited altogether? E.g., anything mixing small integrals of `mV` and `MV` could be detected at compile time, even more usefully for user-derived types. Could make for a nice sanity check that you're dealing with the units you think you are, maybe.
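A sketch of that compile-time check (hypothetical, not library API, and ignoring overflow of the scaled limits themselves):

```cpp
#include <cstdint>
#include <limits>

// A conversion is "degenerate" when scaling by Num/Den maps the whole range
// of Rep onto a single value, e.g. any small integral mV -> MV.
template<typename Rep, std::intmax_t Num, std::intmax_t Den>
    requires (Num > 0 && Den > 0)
constexpr bool degenerate_conversion =
    std::numeric_limits<Rep>::max() * Num / Den ==
    std::numeric_limits<Rep>::min() * Num / Den;

static_assert(degenerate_conversion<std::int8_t, 1, 1'000'000'000>);  // mV -> MV
static_assert(!degenerate_conversion<std::int8_t, 1, 10>);            // mV -> cV
```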