I can suggest a way to reduce your equation to an integral equation that can be solved numerically by approximating its kernel with a matrix, so that the integration becomes a matrix multiplication.
First, note that the equation can be integrated twice over x, first from 1 to x (which uses the boundary condition on A'[1]) and then from 0 to x (which uses the one on A[0]), so that:

    A[x] == a0 + aprime1*x + Integrate[intIntK[x, x1]*A[x1], {x1, 0, 1}]

where intIntK is the kernel K integrated twice in its first argument in the same way.
Now we can discretize this equation by putting it on an equidistant grid with spacing delta:

    A[x_i] == a0 + aprime1*x_i + delta*Sum[intIntK[x_i, x_j]*A[x_j], {j, 1, n}]
Here A[x] becomes a vector, the integrated kernel intIntK becomes a matrix, and the integration is replaced by matrix multiplication; the problem then reduces to a system of linear equations.
The simplest case (which I will consider here) is when the intIntK kernel can be obtained analytically; in that case the method is quite fast. Here is a function that creates the integrated kernel as a pure function:
Clear[computeDoubleIntK];
computeDoubleIntK[kernelF_] :=
 Block[{x, x1},
  Function[
   Evaluate[
    Integrate[
      Integrate[kernelF[y, x1], {y, 1, x}] /. x -> y, {y, 0, x}] /.
     {x -> #1, x1 -> #2}]]]
In our case:
In[99]:= K[x_, x1_] := 1;

In[100]:= kernel = computeDoubleIntK[K]

Out[100]= -#1 + #1^2/2 &
Here is a function that creates the kernel matrix and the right-hand-side vector rhs:
computeDiscreteKernelMatrixAndRHS[intkernel_, a0_, aprime1_, delta_, interval : {_, _}] :=
 Module[{grid, rhs, matrix},
  grid = Range[Sequence @@ interval, delta];
  rhs = a0 + aprime1*grid; (* constant plus a linear term *)
  matrix = IdentityMatrix[Length[grid]] - delta*Outer[intkernel, grid, grid];
  {matrix, rhs}]
To give a very rough idea of what this produces (I use delta = 1/2):
In[101]:= computeDiscreteKernelMatrixAndRHS[kernel, 0, 1, 1/2, {0, 1}]

Out[101]= {{{1, 0, 0}, {3/16, 19/16, 3/16}, {1/4, 1/4, 5/4}}, {0, 1/2, 1}}
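If you want to verify these numbers outside of Mathematica, the construction is easy to reproduce. Here is a sketch in plain Python (the function names are mine, not part of the code above) that uses exact rationals and yields exactly the matrix and rhs shown:

```python
from fractions import Fraction as F

def kernel(x, x1):
    # analytically double-integrated kernel for K(x, x1) = 1: x^2/2 - x
    # (it happens not to depend on x1)
    return x * x / 2 - x

def discrete_system(intkernel, a0, aprime1, delta, interval):
    lo, hi = interval
    n = int((hi - lo) / delta) + 1
    grid = [lo + k * delta for k in range(n)]
    rhs = [a0 + aprime1 * x for x in grid]
    # matrix of the linear system (I - delta*M), with M[i][j] = intIntK(x_i, x_j)
    matrix = [[(1 if i == j else 0) - delta * intkernel(grid[i], grid[j])
               for j in range(n)] for i in range(n)]
    return matrix, rhs

matrix, rhs = discrete_system(kernel, F(0), F(1), F(1, 2), (F(0), F(1)))
```

Working in Fraction arithmetic reproduces the rational entries 3/16, 19/16 etc. verbatim rather than as floating-point approximations.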
Now we need to solve the linear system and interpolate the result, which is done by the following function:
Clear[computeSolution];
computeSolution[intkernel_, a0_, aprime1_, delta_, interval : {_, _}] :=
 With[{grid = Range[Sequence @@ interval, delta]},
  Interpolation@
   Transpose[{grid,
     LinearSolve @@ computeDiscreteKernelMatrixAndRHS[intkernel, a0, aprime1, delta, interval]}]]
Here I call it with delta = 0.1:
In[90]:= solA = computeSolution[kernel, 0, 1, 0.1, {0, 1}]

Out[90]= InterpolatingFunction[{{0., 1.}}, <>]
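To see the whole pipeline end to end, here is a NumPy sketch of the same method (again, the names are mine). For this kernel the exact solution is easy to derive by hand: A'' equals the constant c = Integral of A over [0, 1], which with A[0] = 0, A'[1] = 1 gives c = 3/8 and A[x] = 5 x/8 + 3 x^2/16, and that can serve as the exact reference here:

```python
import numpy as np

def kernel(x, x1):
    # analytically double-integrated kernel for K = 1: x^2/2 - x;
    # the 0*x1 term only keeps the broadcast shape, the value is x1-independent
    return x**2 / 2 - x + 0.0 * x1

def solve_discrete(intkernel, a0, aprime1, delta, interval):
    lo, hi = interval
    grid = np.arange(lo, hi + delta / 2, delta)
    rhs = a0 + aprime1 * grid
    # discretized equation: (I - delta*M) A = rhs, M[i, j] = intIntK(x_i, x_j)
    matrix = np.eye(len(grid)) - delta * intkernel(grid[:, None], grid[None, :])
    return grid, np.linalg.solve(matrix, rhs)

grid, A = solve_discrete(kernel, 0.0, 1.0, 0.1, (0.0, 1.0))

# exact solution for this kernel and boundary data
exact = 5 * grid / 8 + 3 * grid**2 / 16
```

With delta = 0.1 the maximal deviation from the exact solution is on the order of 10^-2, consistent with the first-order accuracy of the simple rectangle rule implicit in the delta*Sum discretization.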
Now we plot the result against the exact analytic solution found by @Sasha, together with the error:

[plots of the numerical and exact solutions, and of their difference]
I deliberately chose delta large enough to make the errors visible; had I chosen delta of, say, 0.01, the plots would be visually indistinguishable. Of course, the price of a smaller delta is the need to create and solve larger matrices.
For kernels that can be integrated analytically, the main bottleneck will be LinearSolve, but in practice it is pretty fast for matrices that are not too large. (Matrix inversion has worse asymptotic complexity, but that only starts to matter for really large matrices, which are not needed in this approach, since it can be combined with an iterative scheme; see below.) When the kernel cannot be integrated analytically, the main bottleneck will be evaluating it at many points to build the matrix. In that case, you would typically define:
intK[x_?NumericQ, x1_?NumericQ] := NIntegrate[K[y, x1], {y, 1, x}]
intIntK[x_?NumericQ, x1_?NumericQ] := NIntegrate[intK[z, x1], {z, 0, x}]
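When K has no closed-form antiderivative, these definitions amount to nested numerical quadrature. A minimal Python analogue (my names; a composite trapezoid rule standing in for NIntegrate), checked against the K = 1 case, where the closed form intIntK = x^2/2 - x is known:

```python
import numpy as np

def trap(f, x):
    # composite trapezoid rule over the nodes x
    f, x = np.asarray(f, float), np.asarray(x, float)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

def K(y, x1):
    return 1.0  # a kernel with a known closed form, for checking

def int_k(x, x1, n=201):
    # intK(x, x1) = Integral_1^x K(y, x1) dy
    y = np.linspace(1.0, x, n)
    return trap([K(yi, x1) for yi in y], y)

def int_int_k(x, x1, n=201):
    # intIntK(x, x1) = Integral_0^x intK(z, x1) dz
    z = np.linspace(0.0, x, n)
    return trap([int_k(zi, x1) for zi in z], z)
```

Note the nesting: each evaluation of int_int_k costs n^2 evaluations of K, and building the kernel matrix repeats this for every matrix entry; this is exactly the bottleneck mentioned above.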
To speed this up in such cases, you can precompute the intK kernel on a grid and then interpolate, and do the same for intIntK. This, however, introduces additional errors that you will have to estimate.
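One way to implement this tabulation idea (a sketch under my own conventions, not the code above): tabulate the inner integral on a grid once, and obtain the outer integral by a single cumulative-trapezoid pass instead of a fresh quadrature per point; again checked against the K = 1 closed form x^2/2 - x:

```python
import numpy as np

def cumtrap(f, x):
    # cumulative composite trapezoid: out[i] = Integral from x[0] to x[i] of f
    f, x = np.asarray(f, float), np.asarray(x, float)
    out = np.zeros_like(f)
    out[1:] = np.cumsum((f[1:] + f[:-1]) * np.diff(x) / 2)
    return out

t = np.linspace(0.0, 1.0, 501)   # tabulation grid
x1 = 0.3                         # one fixed second argument of the kernel
K_vals = np.ones_like(t)         # K(t, x1) = 1 here

# intK(t, x1) = Integral_1^t K = (cumulative integral from 0) - (its value at 1)
intK_tab = cumtrap(K_vals, t) - cumtrap(K_vals, t)[-1]
# intIntK(t, x1) = Integral_0^t intK, obtained in one pass over the table
intIntK_tab = cumtrap(intK_tab, t)
```

Values between the tabulated points would then come from interpolation (Interpolation in Mathematica, np.interp here), which is where the additional error mentioned above enters.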
The grid itself does not have to be equidistant (I used one only for simplicity); it can (and probably should) be adaptive and, in general, non-uniform.
As a final illustration, consider an equation with a non-trivial but symbolically integrable kernel:
In[146]:= sinkern = computeDoubleIntK[50*Sin[Pi/2*(#1 - #2)] &]

Out[146]= (100 (2 Sin[1/2 \[Pi] (-#1 + #2)] + Sin[(\[Pi] #2)/2] (-2 + \[Pi] #1)))/\[Pi]^2 &

In[157]:= solSin = computeSolution[sinkern, 0, 1, 0.01, {0, 1}]

Out[157]= InterpolatingFunction[{{0., 1.}}, <>]

Here are a few checks:
In[163]:= Chop[{solSin[0], solSin'[1]}]

Out[163]= {0, 1.}

In[153]:= diff[x_?NumericQ] := solSin''[x] - NIntegrate[50*Sin[Pi/2*(#1 - #2)] &[x, x1]*solSin[x1], {x1, 0, 1}];

In[162]:= diff /@ Range[0, 1, 0.1]

Out[162]= {-0.0675775, -0.0654974, -0.0632056, -0.0593575, -0.0540479, -0.0474074, -0.0395995, -0.0308166, -0.0212749, -0.0112093, 0.000369261}
To conclude, let me emphasize that using this method properly requires a careful error analysis, which I have not carried out here.
EDIT
You can also use this method to obtain an initial approximate solution and then improve it iteratively, with FixedPoint or otherwise. That way you get relatively fast convergence and can reach the required precision without having to build and solve huge matrices.
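A sketch of that iteration in NumPy terms (my names; FixedPoint replaced by a plain loop): the discretized equation has the form A = rhs + delta*M.A, and for this example delta*M is a contraction (its max-norm is about 0.55), so direct iteration converges geometrically to the same answer LinearSolve gives:

```python
import numpy as np

def kernel(x, x1):
    # analytically double-integrated kernel for K = 1 (x1-independent;
    # the 0*x1 term only keeps the broadcast shape)
    return x**2 / 2 - x + 0.0 * x1

delta = 0.1
grid = np.arange(0.0, 1.0 + delta / 2, delta)
rhs = grid.copy()                 # a0 = 0, aprime1 = 1
M = kernel(grid[:, None], grid[None, :])

# fixed-point iteration A <- rhs + delta * M . A, starting from the rhs itself
# (in practice one would start from a coarse-grid solution instead)
A = rhs.copy()
for _ in range(100):
    A = rhs + delta * (M @ A)

# reference: the direct linear solve used in the main method
A_direct = np.linalg.solve(np.eye(len(grid)) - delta * M, rhs)
```

Each iteration is a matrix-vector product, so for large grids this avoids the cost of factoring the full matrix, at the price of needing the iteration map to be contracting.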