I found the problem really interesting, so I decided to give it a try. I don't know how pythonic or natural it is, but I think I found a more accurate way to fit the edge of a dataset like yours, using information from every point.
First of all, let's generate some random data that resembles the one shown. This part can easily be skipped; I include it only so that the code is complete and reproducible. I used two two-dimensional normal distributions to simulate the overdensities, and sprinkled a layer of uniformly distributed random points on top. These were then laid over a linear relation similar to yours, everything below the line was cut off, and the end result looked like this:
Here is a snippet of code to do this:
    import numpy as np

    x_res = 1000
    x_data = np.linspace(0, 2000, x_res)
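The snippet above is truncated, so here is one self-contained way the generation described could look. All the specific numbers (blob centres, covariances, the slope and offset of the cut line) are assumptions for illustration, not values recovered from the original figure:

```python
import numpy as np

np.random.seed(42)

# two 2D normal "blobs" to mimic the overdensities
blob_a = np.random.multivariate_normal([600, 50], [[1e4, 0], [0, 60]], 2000)
blob_b = np.random.multivariate_normal([1400, 80], [[1e4, 0], [0, 60]], 2000)

# a layer of uniformly scattered points on top
layer = np.column_stack([np.random.uniform(0, 2000, 3000),
                         np.random.uniform(0, 120, 3000)])

points = np.vstack([blob_a, blob_b, layer])

# a linear lower edge similar to the one in the question (slope/offset assumed)
slope, offset = 0.04, 10.0
edge = slope * points[:, 0] + offset

# cut off everything below the line
dset = points[points[:, 1] >= edge].T   # dset[0] -> x values, dset[1] -> y values
```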
Now that we have the data and a model, we can brainstorm how to fit the edge of the point distribution. Commonly used regression methods, such as the non-linear least squares routine scipy.optimize.curve_fit, take the data values y and optimize the free parameters of a model so that the residuals between y and model(x) are minimal. Non-linear least squares is an iterative process that tries to wiggle the curve parameters at every step to improve the fit. Now clearly, this is the one thing we do not want it to do, since we want our minimization routine to take us as far away from the best-fit curve as possible (but not too far).
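The fitting code below refers to a model(x, *pars) function that is never shown in this answer. For a straight lower edge like the one in the mock data, a minimal version could look like this (the two-parameter linear form is an assumption; any curve with free parameters would work the same way):

```python
def model(x, slope, offset):
    # a straight line; swap in any parametrized curve that suits your data
    return slope * x + offset
```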
So instead, consider the following function. Rather than simply returning the residuals, it will also "flip" the points above the curve at every iteration step and factor them in as well. This way there are effectively always more points below the curve than above it, causing the curve to shift downwards with every iteration! Once the lowest points are reached, the minimum of the function has been found, and so has the edge of the scatter. Of course, this method assumes that you have no outliers below the curve - but your figure does not seem to suffer from them much.
Here are the functions that implement this idea:
    def get_flipped(y_data, y_model):
        # keep only the (negative) differences for points above the curve
        flipped = y_model - y_data
        flipped[flipped > 0] = 0
        return flipped

    def flipped_resid(pars, x, y):
        """
        For every iteration, everything above the currently proposed
        curve is mirrored down, so that the next iteration
        progressively shifts the curve downwards.
        """
        y_model = model(x, *pars)
        flipped = get_flipped(y, y_model)
        resid = np.square(y + flipped - y_model)
        return resid
Let's see how this performs on the data above:
    # plotting the mock data
    plt.plot(dset[0], dset[1], '.', alpha=0.2, label='Test data')
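Since the snippets in this answer are truncated, here is a self-contained end-to-end sketch of the fit itself: toy data with a known lower edge, the flipped-residual functions from above, and the leastsq call. The toy data, the true edge parameters, and the initial guess are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.default_rng(0)

# toy scatter with a known lower edge: y = 0.04 * x + 10, plus positive noise
x = rng.uniform(0, 2000, 4000)
y = 0.04 * x + 10.0 + rng.uniform(0, 40, 4000)

def model(x, slope, offset):
    return slope * x + offset

def get_flipped(y_data, y_model):
    flipped = y_model - y_data
    flipped[flipped > 0] = 0
    return flipped

def flipped_resid(pars, x, y):
    # points above the curve are mirrored onto it, so only points
    # below the current curve contribute to the residual vector
    y_model = model(x, *pars)
    flipped = get_flipped(y, y_model)
    return np.square(y + flipped - y_model)

# the initial guess should land inside the scatter, or the fit may not converge
guesses = (0.03, 30.0)
fit_pars, flag = leastsq(func=flipped_resid, x0=guesses, args=(x, y))
# fit_pars now describes the lower edge of the scatter
```

Note that leastsq minimizes the sum of squares of whatever vector the function returns, so returning np.square(...) here effectively penalizes a higher power of the violations; returning the plain differences would work as well.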
The most important part is the call to the leastsq function. Make sure you are careful with the initial guesses - if the guess does not fall onto the scatter, the model may fail to converge properly. After putting a suitable guess in ...

Voila! The edge perfectly matches the real one.