Checking values against NumPy dtypes - what's the cleanest way to validate?

I want to test an unknown value against the limits of a given NumPy dtype - for example, if I have an integer value, will it fit in a uint8?

As far as I can tell, the NumPy dtype machinery doesn't offer a way to do something like this:

    ### FICTIONAL NUMPY CODE: I made this up ###
    try:
        numpy.uint8.validate(rupees)
    except numpy.dtype.ValidationError:
        print "Users can't hold more than 255 rupees."

My little fantasy API here is modeled on the way Django model fields are validated, but that's just one example. The best mechanism I've been able to come up with is along these lines:

    >>> nd = numpy.array([0, 0, 0, 0, 0, 0], dtype=numpy.dtype('uint8'))
    >>> nd[0]
    0
    >>> nd[0] = 1
    >>> nd[0] = -1
    >>> nd
    array([255,   0,   0,   0,   0,   0], dtype=uint8)
    >>> nd[0] = 257
    >>> nd
    array([1, 0, 0, 0, 0, 0], dtype=uint8)

Out-of-range values assigned into a numpy.ndarray explicitly typed as numpy.uint8 are silently wrapped modulo the type's range, without an exception or any other error state being raised.

I'd rather not go full architecture-astronaut here, of course, but the obvious alternative looks like an unmaintainable spaghetti mess of if dtype(this) ... elif dtype(that). Is there anything I can do besides embarking on the grandiose and pretentious act of writing my own validation API?

3 answers

If a is your original iterable, you can do something along the following lines:

 np.all(np.array(a, dtype=np.int8) == a) 

Simply put, this compares the resulting ndarray with the original values and tells you whether the conversion was lossless.

It will also catch things like using a floating-point type that is too narrow to represent certain values exactly:

    >>> a = [0, 0, 0, 0, 0, 0.123456789]
    >>> np.all(np.array(a, dtype=np.float32) == a)
    False
    >>> np.all(np.array(a, dtype=np.float64) == a)
    True

Edit: One caveat when using the above code with floating-point numbers is that NaN always compares unequal, even to itself. If necessary, it is trivial to extend the code to handle this case.
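One possible sketch of that extension, wrapped in a helper (fits_losslessly is a made-up name, not a NumPy API): positions where both the original and the converted array hold NaN are counted as matching.

```python
import numpy as np

def fits_losslessly(a, dtype):
    """Round-trip check that treats NaN == NaN as a match."""
    original = np.asarray(a)
    converted = original.astype(dtype)
    equal = converted == original
    # NaN != NaN, so explicitly accept positions that are NaN on both sides
    if np.issubdtype(original.dtype, np.floating) and np.issubdtype(np.dtype(dtype), np.floating):
        equal |= np.isnan(original) & np.isnan(converted)
    return bool(np.all(equal))

print(fits_losslessly([0.0, np.nan], np.float32))   # True: NaN survives the cast
print(fits_losslessly([0.123456789], np.float32))   # False: precision lost
```

The same helper still rejects integer narrowing that wraps, e.g. fits_losslessly([1, -1], np.uint8) is False.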


Take a look at the numpy iinfo / finfo structs. They should provide all the information needed for a validation service covering the elementary types. This won't work for composite or binary dtypes, though; for those you would still have to implement the service skeleton yourself.
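A minimal sketch of a range check built on those structs (in_range is a made-up helper name):

```python
import numpy as np

def in_range(value, dtype):
    """Return True if value lies within the representable range of dtype."""
    dtype = np.dtype(dtype)
    if np.issubdtype(dtype, np.integer):
        info = np.iinfo(dtype)       # integer bounds
    elif np.issubdtype(dtype, np.floating):
        info = np.finfo(dtype)       # float bounds
    else:
        raise TypeError("only elementary numeric dtypes are supported")
    return info.min <= value <= info.max

print(in_range(255, np.uint8))   # True
print(in_range(257, np.uint8))   # False
print(in_range(-1, np.uint8))    # False
```

Note this checks range only; for floats it won't tell you whether a value is representable exactly at the given precision.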


Try numpy.seterr() with the over keyword to turn overflows into warnings or errors.

e.g.

 numpy.seterr(over='raise') 
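A quick sketch of what that buys you: with over='raise', overflowing arithmetic on NumPy scalar types raises FloatingPointError instead of silently wrapping. (As far as I know this catches arithmetic overflow on scalars, not the wrapping array assignment shown in the question.)

```python
import numpy as np

# Promote overflow from a silent wrap / warning to a hard error.
np.seterr(over='raise')

try:
    np.uint8(200) + np.uint8(100)   # 300 does not fit in uint8
except FloatingPointError:
    print("overflow detected")
```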
