
Error while reading more than 64 KB of data using FatFs

Hi, can anyone please help me solve a FatFs read problem?

I am using a Cortex-M3 Luminary LM3S6965 controller. My application writes
100-150 KB of data (1024 bytes at a time, in a for loop) to a single .csv file on a 4 GB micro SD card formatted as FAT16, and it works perfectly.

But when I try to read the data back from the SD card (1024 bytes at a time), it reads up to 65535 bytes and then hangs.

How can I read the complete 100-150 KB of data from the SD card?

Could anybody help?
Thanks in advance...

  • I have tried reading the SD card in different ways, but it's not working. After reading 64 KB of data, the "f_read" function causes the processor to hang.

    I am using "FatFs - FAT file system module R0.04b (C)ChaN, 2007".

    Here I have already saved a ".csv" file to the SD card, and I am just trying to read this .csv file back.
    My sample code is given below:

    f_mount(0, &l_stFatFs);
    
    if(f_open(&lstRead_File, (const uint8_t *)&gbRD_SDFile_Name[0], FA_READ)==FR_OK)
    {
            ldLen = f_size(&lstRead_File);
            f_close(&lstRead_File);
            ldLen /= 1023;
    }
    
    for(gdIndex = 0; gdIndex < ldLen; gdIndex++)
    {
            stSDC.FileName = (uint8_t *)&gbRD_SDFile_Name[0];
            stSDC.Buffer = (char *)&gbFTP_Buffer[0];
            stSDC.Length = 1023;
            stSDC.Offset = gdData_Index;
            __disable_irq();
            if(!(MT_SDC_Read(&stSDC)))
                   gdData_Index += 1023;
            __enable_irq();
    }

    This one is not working, so could anyone give me sample code to read from and write to an SD card using FatFs?

  • You haven't shown what data type your "gdData_Index" has.

    And this is probably the first time I have seen anyone walking through a large file in 1023-byte steps. Why not use a block size of 2^n - 256, 512, 1024, etc.? It's quite common for the driver layer to have buffering schemes that work on 2^n-sized blocks, and memory cards, hard drives, etc. normally have sectors that are 2^n bytes large.

    Maybe you hit an issue somewhere in the driver layer when a read spans the 64 KB boundary. If you use 1024-byte strides, none of your reads will span that boundary - one read will end exactly at offset 65536 = 2^16 and the next will start there.

  • Well, where's the code doing the actual reading? And why would you keep opening/closing the file? Interrupts disabled? Are you polling in your SD read code?

  • My application logs analog/digital I/O data to local as well as remote storage for data monitoring. I write to the SD card every 100 ms.
    Then, every hour, I read the data from the SD card and upload it to a remote server.

    When reading the SD card, I have to find the size of the file; for that I open the file, get its size using "f_size", and then close it.

    * The data type of "gdData_Index" is uint32_t.

    * If I try to read data in 2^n-byte chunks, i.e. 1024 bytes at a time, then "f_read" returns 0 bytes of data.

    * As the SD card works over SPI, I have to disable all interrupts before accessing SPI to avoid a system crash, and re-enable them once the SPI work is done.

  • My problem of reading a file larger than 64 KB from the SD card is still not resolved.

    I have tried one more experiment: if my file size is >64 KB and I read only 64 KB of data continuously, that code works fine.

    So what is the real problem?
    Please help... Give me some sample code...

  • "So what is real problem?"

    That you don't seem to like to perform any debugging.

    Haven't you taken a closer look at the low-level SPI code and seen if it does what it should?

    I'm not too impressed with an implementation that requires interrupts to be disabled while you read 1023 bytes - that's a significant amount of time, which means serial ports, timers, etc. will not be properly serviced during it. Maybe you don't use other time-critical peripheral hardware, but real-world applications normally do. Correct code should only need to turn off interrupts around a few processor instructions, when something needs to be performed atomically.

    By the way - if 1024 bytes really is too large a block size, then I would have tried 512 instead of selecting 1023. It's very often faster to use a block size of 2^n bytes when there is a lower layer that may buffer, or when the hardware is block-based.