For reading more than 15 lakh (1.5 million) records efficiently in PHP, you need to optimize memory usage and speed. Here’s a comprehensive recommendation:
Recommended Approach: Use JSON or CSV with Streaming
If you’re dealing with data files (like .json, .csv, or .txt), use a streaming parser instead of loading the entire file into memory.
1. Best File Format: CSV (Highly Recommended)
Why CSV?
- Compact file size
- Faster to parse than JSON
- Supported natively in PHP
- Easy to stream line-by-line
PHP Code to Read Large CSV Efficiently:
```php
$handle = fopen("large-data.csv", "r");

if ($handle !== false) {
    while (($row = fgetcsv($handle)) !== false) {
        // Process each $row
    }
    fclose($handle);
} else {
    echo "Could not open file!";
}
```
This reads line-by-line, so memory usage stays low.
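If each row also has to be written somewhere (for example, bulk-inserted into a database), a common pattern is to collect rows into small batches so you never hold all 15 lakh rows at once. A rough sketch, where processBatch() is a hypothetical placeholder for whatever work you do per batch:

```php
$handle = fopen("large-data.csv", "r");
$batch = [];
$batchSize = 1000; // Tune this to your memory budget

if ($handle !== false) {
    while (($row = fgetcsv($handle)) !== false) {
        $batch[] = $row;

        if (count($batch) >= $batchSize) {
            processBatch($batch); // Hypothetical helper: bulk insert, transform, etc.
            $batch = [];          // Free the processed rows
        }
    }

    if ($batch !== []) {
        processBatch($batch); // Flush the final partial batch
    }

    fclose($handle);
}
```

This keeps peak memory at roughly one batch of rows instead of the whole file.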
2. If JSON is a Must: Use Streaming JSON Parser
❌ Do NOT use json_decode(file_get_contents(...)) for large files: it loads the entire file into memory at once.
Use JsonMachine
Install via Composer:
```bash
composer require halaxa/json-machine
```
PHP Code Example:
```php
use JsonMachine\Items; // JSON Machine 1.x entry point (older 0.x releases used JsonMachine\JsonMachine)

require 'vendor/autoload.php';

$jsonStream = Items::fromFile('large-data.json');

foreach ($jsonStream as $key => $value) {
    // Process $value
}
```
This reads JSON incrementally — perfect for 15+ lakh records.
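If your records sit under a key inside the JSON document rather than at the top level, JSON Machine (1.x) can stream just that subtree via its pointer option. A minimal sketch, assuming a hypothetical file shaped like {"results": [...]}:

```php
use JsonMachine\Items;

require 'vendor/autoload.php';

// Assumed structure: {"results": [ {...}, {...}, ... ]}
// The 'pointer' option limits iteration to the /results subtree,
// still decoding one record at a time.
$records = Items::fromFile('large-data.json', ['pointer' => '/results']);

foreach ($records as $index => $record) {
    // Process $record
}
```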
3. Consider Using a Database Instead
If you’re reading this much data repeatedly or filtering/searching it:
✅ Import it into MySQL/PostgreSQL
✅ Use LIMIT + OFFSET or, better, cursor/keyset pagination
✅ Use mysqli or PDO and fetch rows one at a time with fetch() instead of fetchAll() when memory matters (see the sketch below)
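Here is a rough sketch of keyset pagination with PDO, assuming a hypothetical records table with an auto-increment id primary key (the table, column, and connection details are placeholders, not from your setup):

```php
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$lastId    = 0;
$batchSize = 10000;

do {
    // Keyset pagination: seek past the last seen id instead of a growing OFFSET,
    // which gets slower the deeper you page into 15+ lakh rows.
    $stmt = $pdo->prepare('SELECT * FROM records WHERE id > :lastId ORDER BY id LIMIT :limit');
    $stmt->bindValue(':lastId', $lastId, PDO::PARAM_INT);
    $stmt->bindValue(':limit', $batchSize, PDO::PARAM_INT);
    $stmt->execute();

    $count = 0;
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        // Process one row at a time instead of loading everything with fetchAll()
        $lastId = (int) $row['id'];
        $count++;
    }
} while ($count === $batchSize);
```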
Summary Table
| Format   | Speed | Memory Usage      | Ease of Use | Recommended |
|----------|-------|-------------------|-------------|-------------|
| CSV      | ✅✅✅ | ✅✅✅            | ✅✅✅      | ⭐⭐⭐⭐⭐  |
| JSON     | ✅    | ✅✅ (if streaming) | ✅         | ⭐⭐⭐      |
| Database | ✅✅✅ | ✅✅✅            | ✅✅        | ⭐⭐⭐⭐⭐  |
Bonus Tips:
- Always monitor memory with memory_get_usage()
- For extreme cases, use generators (yield), as in the sketch below
- Raise the limit with ini_set('memory_limit', '512M') only if needed (careful!)
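To illustrate the generator tip, here is a small sketch (readCsvRows is my own helper name, not a built-in) that wraps fgetcsv() in a generator, so the caller loops over all the rows while only one row lives in memory at a time, checking memory_get_usage() along the way:

```php
/**
 * Lazily yields one CSV row at a time (hypothetical helper).
 */
function readCsvRows(string $path): Generator
{
    $handle = fopen($path, 'r');
    if ($handle === false) {
        throw new RuntimeException("Could not open $path");
    }

    try {
        while (($row = fgetcsv($handle)) !== false) {
            yield $row; // Only the current row is held in memory
        }
    } finally {
        fclose($handle); // Runs even if the caller stops iterating early
    }
}

foreach (readCsvRows('large-data.csv') as $i => $row) {
    // Process $row ...

    if ($i % 100000 === 0) {
        echo memory_get_usage(true) . " bytes in use\n"; // Periodic memory check
    }
}
```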