One solution is to read the entire contents of the file as a string of characters with FSCANF, split the string into separate cells at the points where newlines appear using MAT2CELL, remove the extra whitespace at the ends with STRTRIM, then process the string data in each cell as necessary. For example, consider this sample text file 'junk.txt':
hi
hello
1 2 3
FF 00 FF
12 A6 22 20 20 20
FF FF FF
The following code will place each row of the file in a cell of the cell array cellData:
>> fid = fopen('junk.txt','r');
>> strData = fscanf(fid,'%c');
>> fclose(fid);
>> nCharPerLine = diff([0 find(strData == char(10)) numel(strData)]);
>> cellData = strtrim(mat2cell(strData,1,nCharPerLine))

cellData =

    'hi'    'hello'    '1 2 3'    'FF 00 FF'    '12 A6 22 20 20 20'    'FF FF FF'
Now, if you want to convert all of the hexadecimal data (lines 3 through 6 in my sample data file) from strings to numeric vectors, you can use CELLFUN and SSCANF:
>> cellData(3:end) = cellfun(@(s) {sscanf(s,'%x',[1 inf])},cellData(3:end));
>> cellData{3:end}   % Display the converted contents
NOTE: Since you are dealing with such large arrays, you should keep in mind the amount of memory used by your variables. The above solution is vectorized, but it can take up a lot of memory. You may need to overwrite or clear large variables, such as clearing strData once cellData has been created. Alternatively, you can loop over the elements of nCharPerLine and individually process each segment of the larger strData string into the vectors you need, which you can preallocate now that you know how many data lines you have (i.e. nDataLines = numel(nCharPerLine)-nHeaderLines;).
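The loop-based alternative described above might look like the following sketch. It assumes the two header lines ('hi' and 'hello') of the sample file, and that every remaining line contains hexadecimal values; adjust nHeaderLines and the per-line parsing for your actual file:

```matlab
% Read the file contents as one character array (as above)
fid = fopen('junk.txt','r');
strData = fscanf(fid,'%c');
fclose(fid);
nCharPerLine = diff([0 find(strData == char(10)) numel(strData)]);

nHeaderLines = 2;                              % Assumed for the sample file
nDataLines = numel(nCharPerLine)-nHeaderLines;
dataVectors = cell(nDataLines,1);              % Preallocate the output

startIndex = sum(nCharPerLine(1:nHeaderLines))+1;  % Skip past the header lines
for iLine = 1:nDataLines
  nChars = nCharPerLine(nHeaderLines+iLine);
  lineStr = strData(startIndex:startIndex+nChars-1);
  dataVectors{iLine} = sscanf(lineStr,'%x',[1 inf]);  % Parse one line of hex values
  startIndex = startIndex+nChars;
end
clear strData;  % Free the large string once it is no longer needed
```

This processes one line at a time, so at no point do you hold both the full string and the full cell array of converted data in memory at once.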