I have a min and max position for an object, and I want to represent an arbitrary point between them as a float between 0.0 and 1.0. This feels like relatively basic math, but I can't quite figure out what I need to do. Is there a special name for this sort of thing? Also, are there any built-in methods that would be useful here?

  • TropicalDingdong@lemmy.world

    Well, there are normalization, regularization, and standardization; which one you want basically depends on what you're doing and what implications that has for your data.

    X is the set and x is a value in that set.

    So:

    1 - { [max(X) - x] / [max(X) - min(X)] }

    or alternatively,

    [x - min(X)] / [max(X) - min(X)]

    Either should do what you're asking, which sounds like normalization (also called min-max scaling). That will map your values into the range 0 to 1. However, it won't do anything about your data being skewed to one side or the other, so the mean of the normalized values won't necessarily be 0.5, the halfway point between 0 and 1.
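    For a single point, that second formula is a one-liner. Here's a minimal Python sketch (the function name is just illustrative; in graphics and game code this exact operation is often called an "inverse lerp", and e.g. Unity ships it as Mathf.InverseLerp):

    ```python
    def normalize(x, x_min, x_max):
        """Map x from the range [x_min, x_max] onto [0.0, 1.0]."""
        return (x - x_min) / (x_max - x_min)

    # An object whose position runs from 10.0 to 50.0:
    normalize(30.0, 10.0, 50.0)  # 0.5
    ```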

    If you want something like that, you will need to standardize your data prior to running the above algorithm:

    Something like:

    [x - mean(X)] / std(X)

    This will center your data around 0 with unit variance. One caveat: min-max normalization is unaffected by a uniform shift and scale like this, so feeding the standardized values into the first formula gives back exactly the same [0, 1] values as before. The mean only lands at 0.5 when min(X) and max(X) happen to be equidistant from it (which doesn't require a normal distribution).
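    As a rough sketch of both steps in Python (hypothetical helper names, plain lists rather than numpy), which also demonstrates the invariance above:

    ```python
    import statistics

    def standardize(xs):
        """Z-score a list: center at 0 with unit variance."""
        mean = statistics.mean(xs)
        std = statistics.pstdev(xs)  # population standard deviation
        return [(x - mean) / std for x in xs]

    def normalize(xs):
        """Min-max scale a list of values into [0.0, 1.0]."""
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) for x in xs]

    positions = [10.0, 20.0, 25.0, 50.0]
    print(normalize(positions))               # [0.0, 0.25, 0.375, 1.0]
    print(normalize(standardize(positions)))  # same values, up to rounding
    ```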