
split command


To split large files into smaller files in Unix, use the split command. Combined with head and tail, it can also serve as a poor man's editor: you can split a file at a given line and then delete or replace one or several lines.
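As a sketch of the "poor man's editor" idea, here is how to delete line 3 of a five-line file with head and tail (the file names and line numbers are invented for illustration):

```shell
# delete line 3 of a 5-line file by pasting together the
# pieces before and after it ("poor man's editor")
seq 5 > file.txt
head -n 2 file.txt >  new.txt   # lines 1-2 (before the target)
tail -n +4 file.txt >> new.txt  # lines 4-5 (after the target)
cat new.txt
```

The same pattern works for replacing lines: write the replacement text between the head piece and the tail piece.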

It is essentially two programs in one: it can split a file by line count or by byte size. The general syntax is:


split [options] filename prefix 

Replace filename with the name of the large file you wish to split, and prefix with the name you wish to give the small output files. The [options] may be omitted, or replaced with one or more of the options listed below.

The split command gives each output file it creates the name prefix with an extension tacked onto the end that indicates its order. By default, the split command adds aa to the first output file, proceeding through the alphabet to zz for subsequent files. If you do not specify a prefix, most systems use x.

split outputs fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; the default size is 1000 lines, and the default PREFIX is 'x'. With no INPUT, or when INPUT is -, it reads standard input.


NOTE: Mandatory arguments to long options are mandatory for short options too.

-a, --suffix-length=N     use suffixes of length N (default 2)
-b, --bytes=SIZE          put SIZE bytes per output file
-C, --line-bytes=SIZE     put at most SIZE bytes of lines per output file
-d, --numeric-suffixes    use numeric suffixes instead of alphabetic
-l, --lines=NUMBER        put NUMBER lines per output file
    --verbose             print a diagnostic just before each output file is opened
    --help                display this help and exit
    --version             output version information and exit
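For instance, combining -l and -d from the list above (the file and prefix names here are arbitrary):

```shell
# split a 25-line file into 10-line pieces with numeric suffixes
seq 25 > nums.txt
split -d -l 10 nums.txt part.
ls part.*        # part.00 part.01 part.02
wc -l < part.02  # the last piece holds the remaining 5 lines
```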

SIZE may be (or may be an integer optionally followed by) one of the following: KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
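A quick demonstration of the size suffixes (the file names are invented for the example):

```shell
# make a 10 MiB test file, then split it into 4 MiB pieces
dd if=/dev/zero of=big.bin bs=1M count=10 2>/dev/null
split -b 4M big.bin piece.
ls piece.*        # piece.aa piece.ab piece.ac
wc -c < piece.aa  # 4194304 bytes, i.e. 4*1024*1024
```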


split can also divide a file into multiple pieces of a specific size. This is a common operation on archives that do not fit on the target media; for example, you may have a 6GB archive but be able to put only 4GB onto each DVD. The following splits my.log into numbered 1MB pieces:

split -d -b 1M my.log my.log.

In this simple example, assume myfile is 3,000 lines long:

split myfile

This will output three 1000-line files: xaa, xab, and xac.

For more information, consult the man page for the split  command. At the Unix prompt, enter:

man split

csplit  command

There is also the csplit command, which splits files based on context. For more information, see the man page for the csplit command. At the Unix prompt, enter:

man csplit
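A minimal csplit sketch (GNU csplit; the file name and marker text are invented): split a file at every line matching a regular expression, with '{*}' repeating the pattern until the input is exhausted.

```shell
# create a small file with section markers, then cut at each marker
printf 'alpha\nSECTION\nbeta\nSECTION\ngamma\n' > doc.txt
csplit -s doc.txt '/SECTION/' '{*}'   # -s: do not print piece sizes
ls xx*     # xx00 xx01 xx02
cat xx00   # alpha (everything before the first marker)
```

Each piece after the first begins with a matching line, so the marker lines are preserved in the output.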



Old News ;-)

[Nov 08, 2017] Skipping null/empty fields caught by split()



I am attempting to parse a CSV, but am not allowed to install the CSV parsing module because of "security reasons" (what a joke), so I'm attempting to use 'split' to break up a comma-delimited file.

My issue is that as soon as an "empty" field comes up (two commas in a row), split seems to think the line is done and goes to the next one.

Everything I've read online says that split will return a null field, but I don't know how to get it to go to the next element and not just skip to the next line.


while (<INFILE>) {
    # use 'split' to avoid module-dependent functionality
    # split line on commas, OS info in [3] (4th group, but
    # counting starts first element at 0)
    # line = <textonly>,<text+num>,<ip>,<whatIwant>,
    chomp($_);
    @a_splitLine = split(/,/, $_);
    # move OS info out of string to avoid accidentally
    # parsing over stuff
    $s_info = $a_splitLine[3];
}
Could anyone see either a better way to accomplish what I'm trying to do, or help get split to capture all the elements?

I was thinking I could run a simple substitution before parsing of a known string (something ridiculous that'll never show up in my data - like &^%$#), then split, and then when printing, if that matches the current item, just print some sort of whitespace, but that doesn't sound like the best method to me - like I'm overcomplicating it.

Jun 19 '09 # 1

My issue is that as soon as an "empty" field comes up (two commas in a row), split seems to think the line is done and goes to the next one .
No it doesn't. You have a flawed impression of what's happening.

C:\TEMP>type
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
my $str = 'a,,,b,,,,6,,';
my @fields = split /,/, $str;
print Dumper @fields;
C:\TEMP>
$VAR1 = 'a';
$VAR2 = '';
$VAR3 = '';
$VAR4 = 'b';
$VAR5 = '';
$VAR6 = '';
$VAR7 = '';
$VAR8 = '6';
C:\TEMP>perldoc -f split
split /PATTERN/,EXPR
split /PATTERN/
split   Splits the string EXPR into a list of strings and returns that
        list. By default, empty leading fields are preserved, and empty
        trailing ones are deleted. (If all fields are empty, they are
        considered to be trailing.)
        ....
Jun 19 '09 # 2



Interesting, so then how would I access the b or the 6?

#!/bin/perl
use strict;
use warnings;
use Data::Dumper;
my $str = 'a,,,b,,,,6,,';
my @fields = split /,/, $str;
my $n = 0;
print Dumper @fields;
while ($fields[$n]) {
    print "$n: $fields[$n]\n";
    $n++;
}
print "done!\n";
$ ./
$VAR1 = 'a';
$VAR2 = '';
$VAR3 = '';
$VAR4 = 'b';
$VAR5 = '';
$VAR6 = '';
$VAR7 = '';
$VAR8 = '6';
0: a
done!
In the above, my attempt to print with a while loop stops as soon as the first empty field is reached. I'm guessing I'd have to check each one to see which are valid and which are not, but what am I looking for - null?

Jun 19 '09 # 3



If you know which field/index you want, then simply print that field.

If you want/need to loop over the array elements, then use a for or foreach loop, not a while loop.

for my $i ( 0..$#fields ) {
    # only print fields that have a value
    print "index $i = '$fields[$i]'\n" if length $fields[$i];
}
Jun 19 '09 # 4



I have to agree with Ron. Since this is a CSV file, you should already know which field is what. All you would have to do is reference it by its index. Otherwise, you can use the code above to iterate through each one and pull out the fields whose values are non-empty.



Jun 20 '09 # 5



Cool, thanks. I am really only interested in one of those fields, but then have to make sure once I edit that field, I re-append all the others back on, so I will play around with that.

Thanks again!

[Nov 23, 2009] Merging Split Files

Oct 11, 2007

Jesus asks: I have a question about the split command... How can I "come back" to "largefile" from 126 small files?

It sounds like the split command was used. If so, then you can use a for loop with file concatenation. First, I will split the files for my example:

$ ls -l big.log
-rw-rw-r--  1 brock brock 175743061 Oct 11 22:08 big.log
$ expr 175743061 \/ 126
1394786
$ split -b 1394786 big.log
$ ls -1 x*

Split creates its output files in increasing alphabetic order, so when you list them with a shell glob they come out in the order in which they were split. As such, you can simply use cat to merge the files. (Thanks to Paul and Davidov for pointing out my for loop was superfluous.)

$ cat x* >merged.big.log
$ ls -l *.log
-rw-rw-r--  1 brock brock 175743061 Oct 12 22:08 big.log
-rw-rw-r--  1 brock brock 175743061 Oct 12 01:11 merged.big.log
$ diff merged.big.log big.log
$ md5sum *.log
47de08911534957c1768968743468307  big.log
47de08911534957c1768968743468307  merged.big.log


The Last but not Least. Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.

Copyright © 1996-2021 by Softpanorama Society. This site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: November 10, 2020