Why shouldn't data be modified in an HTTP GET request?

I know that using non-GET methods (POST, PUT, DELETE) to modify server data is the right way to do things, and I can find plenty of resources arguing that GET requests should not modify resources on the server.

However, if a client came to me today and said, "I don't care if the 'correct' way is to do this; it's easier for us to use your API if we can just hit URLs and get some XML back. We don't want to construct HTTP requests and POST/PUT XML," what business reasons could I give to convince them otherwise?

Are there caching implications? Security problems? I'm looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous".
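
To make the disagreement concrete, here is a rough sketch of the two styles in Python (the endpoints and payloads are hypothetical):

    import requests

    # What the client wants: every operation, including writes, is just a URL.
    requests.get("https://api.example.com/orders/create?item=42&qty=1")

    # The conventional way: reads use GET; writes use POST/PUT with a body.
    xml = "<order><item>42</item><qty>1</qty></order>"
    requests.post(
        "https://api.example.com/orders",
        data=xml,
        headers={"Content-Type": "application/xml"},
    )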

Edit:

Thanks for the answers so far regarding prefetching. I'm not that interested in prefetching, since this mainly concerns a back-end API used on an internal network, not user-visited HTML pages with links that a browser could prefetch.

+25 · get · Apr 01 '09 at 14:27

7 answers
  • Prefetching: Many web browsers prefetch, meaning they load a page before you click its link, on the assumption that you will click it later. If a GET changes state, a prefetch triggers that change without the user ever clicking.
  • Bots: Many bots crawl and index the Internet by following links, and they only issue GET requests. For this reason, you do not want a GET request to delete anything.
  • Caching: GET requests should not change state, and they should be idempotent. Idempotent means that issuing a request once or issuing it several times gives the same result: there are no side effects. For this reason, GET and caching go hand in hand; a state-changing GET may be answered from a cache and never reach your server at all (see the sketch after this list).
  • The HTTP standard says so. The HTTP standard defines what each HTTP method is for. A great deal of software is built on that standard and assumes you use it the way you are supposed to; ignore it and you get undefined behavior from any number of intermediate programs.
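
A minimal sketch of the distinction in Flask (the endpoints and data are invented for illustration): the first handler is safe to replay; the second is exactly the kind of GET that prefetchers, crawlers, and caches will quietly break.

    from flask import Flask

    app = Flask(__name__)
    items = []

    # Safe and idempotent: repeating this request changes nothing, so any
    # cache, prefetcher, or crawler may issue it as often as it likes.
    @app.get("/items")
    def list_items():
        return {"items": items}

    # Unsafe: every GET appends a record. A browser prefetch or a crawler
    # visit silently creates data, and a cached response means the server
    # never even sees the request the client thinks it sent.
    @app.get("/items/append")
    def append_via_get():
        items.append("surprise")
        return {"count": len(items)}
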
+42 · Apr 01 '09 at 14:29

What about Google finding a link to that page, with all the GET parameters in the URL, and crawling it from time to time? That can lead to disaster.

There's a fun article about exactly this on The Daily WTF.

+6 · Apr 01 '09 at 14:31

GETs can be forced on a user, resulting in cross-site request forgery (CSRF). For example, if you have a logout function at http://example.com/logout.php that changes a user's server-side state, an attacker can place an image tag on any site with that URL as its source: <img src="http://example.com/logout.php">. Loading that image logs the user out. Not a big deal in this example, but if it were a command to transfer funds out of an account, it would be a very big deal.
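
A sketch of the before and after in Flask (the route names and token scheme are illustrative, not a complete CSRF defense): move the state change to POST and tie it to a token the attacker's page has no way to read or guess.

    import secrets
    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-secret"

    @app.post("/login")
    def login():
        # Issue a per-session token that other sites cannot read.
        session["csrf_token"] = secrets.token_hex(16)
        return {"csrf_token": session["csrf_token"]}

    # Vulnerable: a state-changing GET. Any page on the web can trigger it
    # with <img src="http://example.com/logout">.
    @app.get("/logout")
    def logout_via_get():
        session.clear()
        return "logged out"

    # Safer: POST only, and the request must echo the session's token.
    @app.post("/logout")
    def logout_via_post():
        if request.form.get("csrf_token") != session.get("csrf_token"):
            abort(403)
        session.clear()
        return "logged out"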

+5 · Apr 01 '09 at 14:34

Good reasons to do it right...

Proper HTTP methods are an industry standard, well documented, and easily secured. While you should absolutely make life as easy for your client as possible, you don't want to implement something that is easier in the short term in preference to something that is not that much harder for them but offers long-term benefits.

One of my favorite quotes:

Quick and dirty... long after quick has left, dirty remains.

For your client, in a phrase: "a stitch in time saves nine." ;)

+2 · Apr 01 '09 at 14:43

Security, for one. What happens if a web crawler hits a delete link, or a user is tricked into clicking a hyperlink? A user should know what they are doing before they actually do it.

+1 · Apr 01 '09 at 14:31

Security: CSRF is much easier with GET requests.

Using POST won't protect you by itself, but GET makes exploitation trivial and enables mass exploitation through forums and any other place that accepts image tags.

Depending on what you do server-side on a GET, you may also be helping an attacker mount a denial of service (DoS). An attacker can spam thousands of sites with your expensive GET request in an image tag, and every visitor to those sites will fire that expensive GET at your web server, burning a lot of CPU cycles.

I know some pages are heavy anyway, and that is always a risk, but it is a much greater risk if every GET request also, say, inserts ten large records.
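
To make the amplification concrete, a sketch (the endpoint and parameter are invented): the attacker never touches your server directly; other people's browsers do the work. Requiring POST for the expensive operation defeats the trick, because an image tag can only issue GETs.

    from flask import Flask

    app = Flask(__name__)

    # The attacker's payload, pasted into forums and comment sections:
    #
    #     <img src="https://api.example.com/report?rows=1000000">
    #
    # Every visitor's browser dutifully issues that expensive GET.
    # Exposing the operation only via POST means image tags, prefetchers,
    # and crawlers can no longer trigger it.
    @app.post("/report")
    def build_report():
        # ... the expensive work would happen here ...
        return {"status": "queued"}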

+1 · Apr 01 '09 at 15:03

I'm looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous"

...

I don't care if the "correct" way is to do this; it's easier for us

Tell them to think about the worst API they have ever used. Can't they imagine how it came about through quick hacks that were then extended?

Two months from now, it will be easier (and cheaper) if you start with something that makes sense semantically. We call it the "right way" because it makes life easier, not because we want to torment you.

+1 · Apr 24 '09 at 15:05


