Bash Read File Line by Line: 5 Methods Explained

March 17, 2026

Reading a file line by line is one of the most common tasks in Bash scripting, but getting it right requires understanding a few subtle pitfalls. The naive approach works for simple cases but breaks on files with spaces, backslashes, or missing trailing newlines. This guide covers 5 bash read file methods — from the safest general-purpose approach to specialized tools for arrays and stream processing — so you can pick the right one every time.

[Comparison graphic] Method 1 (✓ recommended): while IFS= read -r line; do echo "$line"; done < "$file" handles all edge cases. Method 2 (⚠ avoid): for line in $(cat "$f"); do echo "$line"; done splits on spaces.
Method 1 is the recommended approach; Method 2 breaks on lines containing spaces

1. while IFS= read -r (The Right Way)

This is the correct, portable approach for reading any file line by line:

bash
#!/bin/bash

file="/etc/hosts"

while IFS= read -r line; do
    echo "$line"
done < "$file"

Two flags make this work correctly for all cases:

  • IFS= (empty IFS) — prevents read from stripping leading and trailing whitespace from each line. Without it, lines beginning with spaces get trimmed.
  • -r (raw mode) — prevents read from treating backslashes as escape characters. Without it, a line like C:\Users\name becomes C:Usersname.

The < "$file" redirection at the end feeds the file into the while loop. This is preferable to cat "$file" | while ... for two reasons: it avoids spawning an extra process for cat, and a pipe would run the loop in a subshell, so any variables you set inside it would be lost when the loop ends.
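One edge case the basic loop misses: if the file's last line has no trailing newline, read returns non-zero and the loop body never runs for that final line. A common fix is to add an || [[ -n "$line" ]] check. A minimal sketch (the /tmp/notes.txt path and sample content are just for illustration):

```bash
#!/bin/bash

# Create a two-line sample whose last line has NO trailing newline
printf 'first line\nlast line (no newline)' > /tmp/notes.txt

# The || [[ -n "$line" ]] clause processes the final partial line:
# read fails at EOF but still fills $line with the leftover text.
while IFS= read -r line || [[ -n "$line" ]]; do
    echo "got: $line"
done < /tmp/notes.txt
# → got: first line
# → got: last line (no newline)
```

Without the extra check, the plain loop from above would silently drop "last line (no newline)".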

2. for Loop with cat (Avoid for Line Reading)

bash
#!/bin/bash

# AVOID for line-by-line reading — splits on spaces, not just newlines
for line in $(cat /etc/hosts); do
    echo "$line"
done

# A line like "127.0.0.1   localhost" becomes two iterations:
# "127.0.0.1" and "localhost"
# The line structure and spacing disappear entirely

The problem: Bash performs word splitting on the output of $(cat file). It splits on any whitespace (spaces, tabs, newlines), not just newlines. Use this only if you genuinely want word-by-word iteration.
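If word-by-word iteration really is what you want, a safer pattern combines the Method 1 loop with read -ra, so the splitting is explicit and per-line rather than a side effect of an unquoted expansion. A sketch (the /tmp/hosts.sample file is generated for illustration):

```bash
#!/bin/bash

printf '127.0.0.1   localhost\n10.0.0.5 fileserver\n' > /tmp/hosts.sample

while IFS= read -r line; do
    # Split THIS line into an array on whitespace (default IFS applies
    # to the inner read; the outer IFS= only affects the outer read)
    read -ra words <<< "$line"
    echo "line has ${#words[@]} words: ${words[*]}"
done < /tmp/hosts.sample
# → line has 2 words: 127.0.0.1 localhost
# → line has 2 words: 10.0.0.5 fileserver
```

This keeps line boundaries intact while still giving you the individual fields.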

3. readarray / mapfile (Load into an Array)

When you need random access to lines or to process them multiple times, load the whole file into an array:

bash
#!/bin/bash

# mapfile and readarray are synonyms (Bash 4+)
mapfile -t lines < /etc/hosts

# Access by index
echo "Line 1: ${lines[0]}"
echo "Line 5: ${lines[4]}"
echo "Total lines: ${#lines[@]}"

# Loop over the array
for line in "${lines[@]}"; do
    echo "$line"
done

The -t flag strips the trailing newline character from each element. Without it every element ends with \n. This method requires Bash 4+; macOS still ships Bash 3.2 as /bin/bash, so there you'll need brew install bash or Method 1 instead.
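mapfile can also load just a slice of a file using its -s (skip N lines) and -n (read at most N lines) options. A short sketch with a generated sample file standing in for a real one:

```bash
#!/bin/bash

printf 'l1\nl2\nl3\nl4\nl5\nl6\n' > /tmp/slice.txt

# Skip the first 2 lines, then load at most 3 into the array
mapfile -t -s 2 -n 3 lines < /tmp/slice.txt

echo "loaded ${#lines[@]} lines, starting with: ${lines[0]}"
# → loaded 3 lines, starting with: l3
```

This is handy for paging through huge files without holding every line in memory.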

4. Process Substitution

When you need to read from a command's output rather than a file:

bash
#!/bin/bash

# Process substitution: read from a command's output line by line
while IFS= read -r line; do
    echo "Process: $line"
done < <(ps aux | grep nginx)

# Or read from a filtered file
while IFS= read -r line; do
    echo "$line"
done < <(grep -v '^#' /etc/hosts)  # skip comment lines

The <(command) syntax creates a temporary file descriptor. This avoids the subshell problem you get when piping into a while loop (where variables set inside the loop are lost after it ends).
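The subshell problem is easy to demonstrate: a counter incremented inside a piped loop vanishes after the loop, while the process-substitution version keeps it. A sketch (default Bash behavior; the lastpipe option, if enabled, changes the piped case):

```bash
#!/bin/bash

printf 'a\nb\nc\n' > /tmp/three.txt

# Piped version: the while loop runs in a subshell,
# so count is still 0 after the loop ends.
count=0
cat /tmp/three.txt | while IFS= read -r line; do ((count++)); done
echo "after pipe: $count"
# → after pipe: 0

# Process substitution: the loop runs in the current shell,
# so the final count survives.
count=0
while IFS= read -r line; do ((count++)); done < <(cat /tmp/three.txt)
echo "after process substitution: $count"
# → after process substitution: 3
```

This is the single most common "why is my variable empty after the loop?" bug in Bash.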

5. awk for Complex Per-Line Processing

bash
#!/bin/bash

# awk processes line by line automatically — best for structured data
# Print field 1 and field 3 from a colon-delimited file
awk -F: '{print $1, $3}' /etc/passwd

# Skip comment lines and blank lines
awk '!/^#/ && NF > 0 {print $0}' /etc/hosts

# Print lines that match a pattern
awk '/ERROR/ {print NR": "$0}' /var/log/app.log

Performance on Large Files

For files with millions of lines, awk is typically 5-10x faster than a Bash while read loop: awk's line loop runs in compiled C, while Bash pays interpreter overhead on every iteration. The while IFS= read -r approach is fine for files up to ~100K lines. Above that, prefer awk, sed, or grep for extraction tasks.
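You can get a rough feel for the gap on your own machine with the time builtin. A sketch that generates a 200K-line file (the exact speedup varies with hardware and Bash version, so treat the 5-10x figure as a ballpark):

```bash
#!/bin/bash

# Generate a 200,000-line test file
seq 1 200000 > /tmp/big.txt

# Bash loop: one builtin invocation per line (: is a no-op)
time while IFS= read -r line; do :; done < /tmp/big.txt

# awk: the same traversal in a compiled inner loop
time awk '{ }' /tmp/big.txt
```

Run both and compare the "real" times printed on stderr.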

For more on file testing before you open a file, see the bash check if file exists guide. To process structured text output, the bash grep tutorial complements these methods well.

Summary

Use while IFS= read -r line; do ... done < "$file" as your default for line-by-line reading — it handles every edge case correctly. Use mapfile -t when you need array access to the lines. Reach for awk when processing structured data or when performance on large files matters.