disk_log
A disk-based term logging facility
disk_log
is a disk-based term logger that makes
it possible to efficiently log items to files.
Two types of logs are supported,
halt logs and wrap logs. A halt log
appends items to a single file, the size of which may or may
not be limited by the disk log module, whereas a wrap log utilizes
a sequence of wrap log files of limited size. When a wrap log file
has been filled up, further items are logged on to the next
file in the sequence, starting all over with the first file when
the last file has been filled up. For the sake of efficiency,
items are always written to files as binaries.
Two formats of the log files are supported, the internal format and the external format. The internal format supports automatic repair of log files that have not been properly closed, and makes it possible to efficiently read logged items in chunks using a set of functions defined in this module. In fact, this is the only way to read internally formatted logs. The external format leaves it up to the user to read the logged deep byte lists. The disk log module cannot repair externally formatted logs. An item logged to an internally formatted log must not occupy more than 4 GB of disk space (the size must fit in 4 bytes).
For each open disk log there is one process that handles requests
made to the disk log; the disk log process is created when open/1
is called, provided there exists no process handling the disk log.
A process that opens a disk log can either be an owner
or an anonymous user of the disk log. Each owner is
linked to the disk log
process, and the disk log is closed by the owner should the
owner terminate. Owners can subscribe to notifications,
messages of the form {disk_log, Node, Log, Info}
that are sent
from the disk log process when certain events occur, see
the commands below and in particular the open/1
option
notify.
There can be several owners of a log, but a process cannot own a
log more than once. One and the same process may, however,
open the log
as a user more than once. For a disk log process to properly close
its file and terminate, it must be closed by its owners and once by
some non-owner process for each time the log was used anonymously;
the users are counted, and there must not be any users left when the
disk log process terminates.
Items can be logged synchronously by using the functions
log/2
, blog/2
, log_terms/2
and
blog_terms/2
. For each of these functions, the caller is put
on hold until the items have been logged (but not necessarily
written, use sync/1
to ensure that). By adding an a
to each of the mentioned function names we get functions that log
items asynchronously. Asynchronous functions do not wait for
the disk log process to actually write the items to the file, but
return the control to the caller more or less immediately.
When using the internal format for logs, the functions
log/2
, log_terms/2
, alog/2
, and
alog_terms/2
should be used. These functions log one or
more Erlang terms. By prefixing each of the functions with
a b
(for "binary") we get the corresponding blog
functions for the external format. These functions log one or
more deep lists of bytes or, alternatively, binaries of deep lists
of bytes.
For example, to log the string "hello"
in ASCII format, we
can use disk_log:blog(Log, "hello")
, or
disk_log:blog(Log, list_to_binary("hello"))
. The two
alternatives are equally efficient. The blog
functions
can be used for internally formatted logs as well, but in
this case they must be called with binaries constructed with
calls to term_to_binary/1
. There is no check to ensure
this, it is entirely the responsibility of the caller. If these
functions are called with binaries that do not correspond to
Erlang terms, the chunk/2,3
and automatic repair
functions will fail. The corresponding terms (not the binaries)
will be returned when chunk/2,3
is called.
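As a minimal sketch (the log name my_log and the file name "my_log.LOG" are examples only, not taken from this manual), an internally formatted halt log can be opened, written to with log/2 and blog/2, and flushed to disk as follows:

  %% Minimal sketch, assuming the file is writable in the current directory.
  {ok, my_log} = disk_log:open([{name, my_log},
                                {file, "my_log.LOG"},
                                {type, halt},
                                {format, internal}]),
  ok = disk_log:log(my_log, {event, erlang:timestamp(), started}),
  %% blog/2 may be used on an internal log only with binaries produced
  %% by term_to_binary/1.
  ok = disk_log:blog(my_log, term_to_binary({event, stopped})),
  ok = disk_log:sync(my_log),
  ok = disk_log:close(my_log).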
A collection of open disk logs with the same name running on
different nodes is said to be a distributed disk log
if requests made to any one of the logs are automatically made to
the other logs as well. The members of such a collection will be
called individual distributed disk logs, or just distributed
disk logs if there is no risk of confusion. There is no order
between the members of such a collection. For instance, logged
terms are not necessarily written onto the node where the
request was made before being written onto the other nodes. One could
note here that there are a few functions that do not make
requests to all members of distributed disk logs, namely
info
, chunk
, bchunk
, chunk_step
and
lclose
. An open disk log that is not a distributed disk
log is said to be a local disk log. A local disk log is
accessible only from the node where the disk log process runs,
whereas a distributed disk log is accessible from all nodes in
the Erlang system, with exception for those nodes where a local
disk log with the same name as the distributed disk log exists.
All processes on nodes that have access to a local or
distributed disk log can log items or otherwise change, inspect
or close the log.
It is not guaranteed that all log files of a distributed disk log
contain the same log items; there is no attempt made to synchronize
the contents of the files. However, as long as at least one of
the involved nodes is alive at all times, all items will be logged.
When logging items to a distributed log, or otherwise trying to
change the log, the replies from individual logs are
ignored. If all nodes are down, the disk log functions
reply with a nonode
error.
Note!
In some applications it may not be acceptable that replies from individual logs are ignored. An alternative in such situations is to use several local disk logs instead of one distributed disk log, and implement the distribution without use of the disk log module.
Errors are reported differently for asynchronous log attempts
and other uses of the disk log module. When used synchronously
the disk log module replies with an error message, but when called
asynchronously, the disk log module does not know where to send
the error message. Instead owners subscribing to notifications will
receive an error_status
message.
The disk log module itself does not report errors to the
error_logger
module; it is up to the caller to decide
whether the error logger should be employed or not. The function
format_error/1
can be used to produce readable messages
from error replies. Information events are however sent to the
error logger in two situations, namely when a log is repaired,
or when a file is missing while reading chunks.
The error message no_such_log
means that the given
disk log is not currently open. Nothing is said about
whether the disk log files exist or not.
Note!
If an attempt to reopen or truncate a log fails (see
reopen
and truncate
) the disk log process
immediately terminates. Before the process terminates, links to
owners and blocking processes (see block
) are removed.
The effect is that the links work in one direction only; any
process using a disk log has to check for the error message
no_such_log
if some other process might truncate or
reopen the log simultaneously.
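A hedged sketch of such a check: a hypothetical wrapper (safe_log below is not part of this module) that treats {error, no_such_log} as the log having disappeared rather than crashing the caller:

  %% Hypothetical helper, not part of disk_log: log a term but survive the
  %% disappearance of the disk log process (e.g. after a failed truncate).
  safe_log(Log, Term) ->
      case disk_log:log(Log, Term) of
          ok -> ok;
          {error, no_such_log} -> {error, log_gone};
          {error, _} = Error -> Error
      end.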
Types
log() = term()
dlog_size() =
infinity |
integer() >= 1 |
{MaxNoBytes :: integer() >= 1, MaxNoFiles :: integer() >= 1}
dlog_format() = external | internal
dlog_head_opt() = none | term() | binary() | [dlog_byte()]
dlog_byte() = [dlog_byte()] | byte()
dlog_mode() = read_only | read_write
dlog_type() = halt | wrap
continuation()
Chunk continuation returned by
chunk/2,3
, bchunk/2,3
, or chunk_step/3
.
bytes() = binary() | [byte()]
invalid_header() = term()
file_error() = term()
Functions
accessible_logs() -> {[LocalLog], [DistributedLog]}
LocalLog = DistributedLog = log()
The accessible_logs/0
function returns
the names of the disk logs accessible on the current node.
The first list contains local disk logs, and the
second list contains distributed disk logs.
alog(Log, Term) -> notify_ret()
Log = log()
Term = term()
balog(Log, Bytes) -> notify_ret()
notify_ret() = ok | {error, no_such_log}
The alog/2
and balog/2
functions asynchronously
append an item to a disk log. The function alog/2
is
used for internally formatted logs, and the function balog/2
for externally formatted logs. balog/2
can be used
for internally formatted logs as well provided the binary was
constructed with a call to term_to_binary/1
.
The owners that subscribe to notifications will receive the
message read_only
, blocked_log
or format_external
in case the item cannot be written
on the log, and possibly one of the messages wrap
,
full
and error_status
if an item was written
on the log. The message error_status
is sent if there
is something wrong with the header function or a file error
occurred.
alog_terms(Log, TermList) -> notify_ret()
Log = log()
TermList = [term()]
balog_terms(Log, ByteList) -> notify_ret()
notify_ret() = ok | {error, no_such_log}
The alog_terms/2
and balog_terms/2
functions
asynchronously append a list of items to a disk log.
The function alog_terms/2
is used for internally
formatted logs, and the function balog_terms/2
for externally formatted logs. balog_terms/2
can be used
for internally formatted logs as well provided the binaries were
constructed with calls to term_to_binary/1
.
The owners that subscribe to notifications will receive the
message read_only
, blocked_log
or format_external
in case the items cannot be written
on the log, and possibly one or more of the messages wrap
,
full
and error_status
if items were written
on the log. The message error_status
is sent if there
is something wrong with the header function or a file error
occurred.
block(Log) -> ok | {error, block_error_rsn()}
Log = log()
block(Log, QueueLogRecords) -> ok | {error, block_error_rsn()}
Log = log()
QueueLogRecords = boolean()
block_error_rsn() = no_such_log | nonode | {blocked_log, log()}
With a call to block/1,2
a process can block a log.
If the blocking process is not an owner of the log, a temporary
link is created between the disk log process and the blocking
process. The link is used to ensure that the disk log is
unblocked should the blocking process terminate without
first closing or unblocking the log.
Any process can probe a blocked log with info/1
or
close it with close/1
. The blocking process can also
use the functions chunk/2,3
, bchunk/2,3
,
chunk_step/3
, and unblock/1
without being
affected by the block. Any other attempt than those hitherto
mentioned to update or read a blocked log suspends the
calling process until the log is unblocked or returns an
error message {blocked_log, Log}, depending on
whether the value of QueueLogRecords is true
or false. The default value of QueueLogRecords
is true, which is used by block/1.
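For example, a process might block a log while it reads all items, along the lines of the following sketch (error returns from chunk/2 are not handled here):

  %% Sketch: block the log, read every term with chunk/2, then unblock.
  %% The blocking process itself may still call chunk/2 while the log is blocked.
  read_all(Log) ->
      ok = disk_log:block(Log),
      try collect(Log, start, [])
      after disk_log:unblock(Log)
      end.

  collect(Log, Cont, Acc) ->
      case disk_log:chunk(Log, Cont) of
          eof -> lists:append(lists:reverse(Acc));
          {Cont2, Terms} -> collect(Log, Cont2, [Terms | Acc])
      end.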
change_header(Log, Header) -> ok | {error, Reason}
Log = log()
Header =
{head, dlog_head_opt()} |
{head_func, MFA :: {atom(), atom(), list()}}
Reason =
no_such_log |
nonode |
{read_only_mode, Log} |
{blocked_log, Log} |
{badarg, head}
The change_header/2
function changes the value of
the head
or head_func
option of a disk log.
change_notify(Log, Owner, Notify) -> ok | {error, Reason}
Log = log()
Owner = pid()
Notify = boolean()
Reason =
no_such_log |
nonode |
{blocked_log, Log} |
{badarg, notify} |
{not_owner, Owner}
The change_notify/3
function changes the value of the
notify
option for an owner of a disk log.
change_size(Log, Size) -> ok | {error, Reason}
Log = log()
Size = dlog_size()
Reason =
no_such_log |
nonode |
{read_only_mode, Log} |
{blocked_log, Log} |
{new_size_too_small, CurrentSize :: integer() >= 1} |
{badarg, size} |
{file_error, file:filename(), file_error()}
The change_size/2
function changes the size of an open log.
For a halt log it is always possible to increase the size,
but it is not possible to decrease the size to something less than
the current size of the file.
For a wrap log it is always possible to increase both the size and number of files, as long as the number of files does not exceed 65000. If the maximum number of files is decreased, the change will not be valid until the current file is full and the log wraps to the next file. The redundant files will be removed next time the log wraps around, i.e. starts to log to file number 1.
As an example, assume that the old maximum number of files is 10 and that the new maximum number of files is 6. If the current file number is not greater than the new maximum number of files, the files 7 to 10 will be removed when file number 6 is full and the log starts to write to file number 1 again. Otherwise the files greater than the current file will be removed when the current file is full (e.g. if the current file is 8, the files 9 and 10); the files between new maximum number of files and the current file (i.e. files 7 and 8) will be removed next time file number 6 is full.
If the size of the files is decreased, the change immediately affects the current log. Of course, it does not change the size of log files that are already full until the next time they are used.
If the log size is decreased for instance to save space,
the function inc_wrap_file/1
can be used to force the log
to wrap.
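For instance, shrinking a wrap log and immediately forcing it onto the next file could look roughly like the following sketch (the new size values are arbitrary examples):

  %% Sketch: limit the log to at most 6 files of 100 kB each, then force a wrap
  %% so that the reduced file size takes effect sooner.
  shrink(Log) ->
      ok = disk_log:change_size(Log, {100 * 1024, 6}),
      disk_log:inc_wrap_file(Log).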
chunk(Log, Continuation) -> chunk_ret()
Log = log()
Continuation = start | continuation()
chunk(Log, Continuation, N) -> chunk_ret()
Log = log()
Continuation = start | continuation()
N = integer() >= 1 | infinity
bchunk(Log, Continuation) -> bchunk_ret()
Log = log()
Continuation = start | continuation()
bchunk(Log, Continuation, N) -> bchunk_ret()
Log = log()
Continuation = start | continuation()
N = integer() >= 1 | infinity
chunk_ret() =
{Continuation2 :: continuation(), Terms :: [term()]} |
{Continuation2 :: continuation(),
Terms :: [term()],
Badbytes :: integer() >= 0} |
eof |
{error, Reason :: chunk_error_rsn()}
bchunk_ret() =
{Continuation2 :: continuation(), Binaries :: [binary()]} |
{Continuation2 :: continuation(),
Binaries :: [binary()],
Badbytes :: integer() >= 0} |
eof |
{error, Reason :: chunk_error_rsn()}
chunk_error_rsn() =
no_such_log |
{format_external, log()} |
{blocked_log, log()} |
{badarg, continuation} |
{not_internal_wrap, log()} |
{corrupt_log_file, FileName :: file:filename()} |
{file_error, file:filename(), file_error()}
The chunk/2,3
and bchunk/2,3
functions make
it possible to efficiently read the terms which have been
appended to an internally formatted log. They minimize disk
I/O by reading 64 kilobyte chunks from the file. The
bchunk/2,3
functions return the binaries read from
the file; they do not call binary_to_term
. Otherwise
they work just like chunk/2,3
.
The first time chunk
(or bchunk
) is called,
an initial continuation, the atom start
, must be
provided. If there is a disk log process running on the
current node, terms are read from that log, otherwise an
individual distributed log on some other node is chosen, if
such a log exists.
When chunk/3
is called, N controls the
maximum number of terms that are read from the log in each
chunk. Default is infinity
, which means that all the
terms contained in the 64 kilobyte chunk are read. If less than
N terms are returned, this does not necessarily mean
that the end of the file has been reached.
The chunk
function returns a tuple
{Continuation2, Terms}, where Terms is a list
of terms found in the log. Continuation2 is yet
another continuation which must be passed on to any
subsequent calls to chunk
. With a series of calls to
chunk
it is possible to extract all terms from a log.
The chunk
function returns a tuple
{Continuation2, Terms, Badbytes} if the log is opened
in read-only mode and the read chunk is corrupt. Badbytes
is the number of bytes in the file which were found not to be
Erlang terms in the chunk. Note also that the log is not repaired.
When trying to read chunks from a log opened in read-write mode,
the tuple {corrupt_log_file, FileName} is returned if the
read chunk is corrupt.
chunk
returns eof
when the end of the log is
reached, or {error, Reason} if an error occurs. Should
a wrap log file be missing, a message is output on the error log.
When chunk/2,3
is used with wrap logs, the returned
continuation may or may not be valid in the next call to
chunk
. This is because the log may wrap and delete
the file into which the continuation points. To make sure
this does not happen, the log can be blocked during the
search.
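A typical read loop over an internally formatted log could be written as in the following sketch (the fold_log name is illustrative only):

  %% Sketch: fold a function over all terms of an internally formatted log.
  fold_log(Log, Fun, Acc0) ->
      fold_log(Log, start, Fun, Acc0).

  fold_log(Log, Cont, Fun, Acc) ->
      case disk_log:chunk(Log, Cont) of
          eof ->
              {ok, Acc};
          {error, _Reason} = Error ->
              Error;
          {Cont2, Terms} ->
              fold_log(Log, Cont2, Fun, lists:foldl(Fun, Acc, Terms));
          {Cont2, Terms, _Badbytes} ->
              %% Only returned for logs opened in read-only mode.
              fold_log(Log, Cont2, Fun, lists:foldl(Fun, Acc, Terms))
      end.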
chunk_info(Continuation) -> InfoList | {error, Reason}
Continuation = continuation()
InfoList = [{node, Node :: node()}, ...]
Reason = {no_continuation, Continuation}
The chunk_info/1
function returns the following pair
describing the chunk continuation returned by
chunk/2,3
, bchunk/2,3
, or chunk_step/3
:
-
{node, Node}. Terms are read from the disk log running on Node.
chunk_step(Log, Continuation, Step) ->
{ok, any()} | {error, Reason}
Log = log()
Continuation = start | continuation()
Step = integer()
Reason =
no_such_log |
end_of_log |
{format_external, Log} |
{blocked_log, Log} |
{badarg, continuation} |
{file_error, file:filename(), file_error()}
The function chunk_step
can be used in conjunction
with chunk/2,3
and bchunk/2,3
to search
through an internally formatted wrap log. It takes as
argument a continuation as returned by chunk/2,3
,
bchunk/2,3
, or chunk_step/3
, and steps forward
(or backward) Step
files in the wrap log. The
continuation returned points to the first log item in the
new current file.
If the atom start
is given as continuation, a disk log
to read terms from is chosen. A local or distributed disk log
on the current node is preferred to an
individual distributed log on some other node.
If the wrap log is not full because all files have not been
used yet, {error, end_of_log}
is returned if trying to
step outside the log.
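As a sketch, the following hypothetical helper skips the first wrap log file and reads the items of the next one:

  %% Sketch: step one file forward from the start of the wrap log and read
  %% the terms of that file's first chunk.
  read_next_file(Log) ->
      case disk_log:chunk_step(Log, start, 1) of
          {ok, Cont} -> disk_log:chunk(Log, Cont);
          {error, _Reason} = Error -> Error
      end.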
close(Log) -> ok | {error, close_error_rsn()}
Log = log()
close_error_rsn() =
no_such_log |
nonode |
{file_error, file:filename(), file_error()}
The function close/1
closes a
local or distributed disk log properly. An internally
formatted log must be closed before the Erlang system is
stopped, otherwise the log is regarded as unclosed and the
automatic repair procedure will be activated next time the
log is opened.
The disk log process is not terminated as long as there are
owners or users of the log. It should be stressed that each
and every owner must close the log, possibly by terminating,
and that any other process - not only the processes that have
opened the log anonymously - can decrement the users
counter by closing the log.
Attempts to close a log by a process that is
not an owner are simply ignored if there are no users.
If the log is blocked by the closing process, the log is also unblocked.
format_error(Error) -> io_lib:chars()
Error = term()
Given the error returned by any function in this module,
the function format_error
returns a descriptive string
of the error in English. For file errors, the function
format_error/1
in the file
module is called.
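For example, an error reply can be made readable before it is printed (a sketch; the report name is illustrative):

  %% Sketch: print a readable description of a disk_log error reply.
  report(Log, Term) ->
      case disk_log:log(Log, Term) of
          ok ->
              ok;
          {error, Reason} ->
              io:format("disk_log error: ~s~n",
                        [disk_log:format_error(Reason)])
      end.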
inc_wrap_file(Log) -> ok | {error, inc_wrap_error_rsn()}
Log = log()
inc_wrap_error_rsn() =
no_such_log |
nonode |
{read_only_mode, log()} |
{blocked_log, log()} |
{halt_log, log()} |
{invalid_header, invalid_header()} |
{file_error, file:filename(), file_error()}
invalid_header() = term()
The inc_wrap_file/1
function forces the internally formatted
disk log to start logging to the
next log file. It can be used, for instance, in conjunction with
change_size/2
to reduce the amount of disk space allocated
by the disk log.
The owners that subscribe to notifications will normally
receive a wrap
message, but in case of
an error with a reason tag of invalid_header
or
file_error
an error_status
message will be sent.
info(Log) -> InfoList | {error, no_such_log}
Log = log()
InfoList = [dlog_info()]
dlog_info() =
{name, Log :: log()} |
{file, File :: file:filename()} |
{type, Type :: dlog_type()} |
{format, Format :: dlog_format()} |
{size, Size :: dlog_size()} |
{mode, Mode :: dlog_mode()} |
{owners, [{pid(), Notify :: boolean()}]} |
{users, Users :: integer() >= 0} |
{status,
Status :: ok | {blocked, QueueLogRecords :: boolean()}} |
{node, Node :: node()} |
{distributed, Dist :: local | [node()]} |
{head,
Head ::
none | {head, term()} | (MFA :: {atom(), atom(), list()})} |
{no_written_items, NoWrittenItems :: integer() >= 0} |
{full, Full :: boolean()} |
{no_current_bytes, integer() >= 0} |
{no_current_items, integer() >= 0} |
{no_items, integer() >= 0} |
{current_file, integer() >= 1} |
{no_overflows,
{SinceLogWasOpened :: integer() >= 0,
SinceLastInfo :: integer() >= 0}}
The info/1
function returns a list of {Tag, Value}
pairs describing the log. If there is a disk log process running
on the current node, that log is used as source of information,
otherwise an individual distributed log on
some other node is chosen, if such a log exists.
The following pairs are returned for all logs:
-
{name, Log}, where Log is the name of the log as given by the open/1 option name.
-
{file, File}. For halt logs File is the filename, and for wrap logs File is the base name.
-
{type, Type}, where Type is the type of the log as given by the open/1 option type.
-
{format, Format}, where Format is the format of the log as given by the open/1 option format.
-
{size, Size}, where Size is the size of the log as given by the open/1 option size, or the size set by change_size/2. The value set by change_size/2 is reflected immediately.
-
{mode, Mode}, where Mode is the mode of the log as given by the open/1 option mode.
-
{owners, [{pid(), Notify}]}, where Notify is the value set by the open/1 option notify or the function change_notify/3 for the owners of the log.
-
{users, Users}, where Users is the number of anonymous users of the log, see the open/1 option linkto.
-
{status, Status}, where Status is ok or {blocked, QueueLogRecords} as set by the functions block/1,2 and unblock/1.
-
{node, Node}. The information returned by the current invocation of the info/1 function has been gathered from the disk log process running on Node.
-
{distributed, Dist}. If the log is local on the current node, then Dist has the value local, otherwise all nodes where the log is distributed are returned as a list.
The following pairs are returned for all logs opened in
read_write
mode:
-
{head, Head}. Depending on the value of the open/1 options head and head_func, or set by the function change_header/2, the value of Head is none (default), {head, H} (head option) or {M,F,A} (head_func option).
-
{no_written_items, NoWrittenItems}, where NoWrittenItems is the number of items written to the log since the disk log process was created.
The following pair is returned for halt logs opened in
read_write
mode:
-
{full, Full}, where Full is true or false depending on whether the halt log is full or not.
The following pairs are returned for wrap logs opened in
read_write
mode:
-
{no_current_bytes, integer() >= 0} is the number of bytes written to the current wrap log file.
-
{no_current_items, integer() >= 0} is the number of items written to the current wrap log file, header inclusive.
-
{no_items, integer() >= 0} is the total number of items in all wrap log files.
-
{current_file, integer()} is the ordinal for the current wrap log file in the range 1..MaxNoFiles, where MaxNoFiles is given by the open/1 option size or set by change_size/2.
-
{no_overflows, {SinceLogWasOpened, SinceLastInfo}}, where SinceLogWasOpened (SinceLastInfo) is the number of times a wrap log file has been filled up and a new one opened or inc_wrap_file/1 has been called since the disk log was last opened (info/1 was last called). The first time info/1 is called after a log was (re)opened or truncated, the two values are equal.
Note that the chunk/2,3
, bchunk/2,3
, and
chunk_step/3
functions do not affect any value
returned by info/1
.
lclose(Log) -> ok | {error, lclose_error_rsn()}
Log = log()
lclose(Log, Node) -> ok | {error, lclose_error_rsn()}
Log = log()
Node = node()
lclose_error_rsn() =
no_such_log | {file_error, file:filename(), file_error()}
The function lclose/1
closes a local log or an
individual distributed log on the current node.
The function lclose/2
closes an individual
distributed log on the specified node if the node
is not the current one.
lclose(Log)
is equivalent to
lclose(Log, node())
.
See also close/1.
If there is no log with the given name
on the specified node, no_such_log
is returned.
log(Log, Term) -> ok | {error, Reason :: log_error_rsn()}
Log = log()
Term = term()
blog(Log, Bytes) -> ok | {error, Reason :: log_error_rsn()}
log_error_rsn() =
no_such_log |
nonode |
{read_only_mode, log()} |
{format_external, log()} |
{blocked_log, log()} |
{full, log()} |
{invalid_header, invalid_header()} |
{file_error, file:filename(), file_error()}
The log/2
and blog/2
functions synchronously
append a term to a disk log. They return ok
or
{error, Reason}
when the term has been written to
disk. If the log is distributed, ok
is always
returned, unless all nodes are down. Terms are written by
means of the ordinary write()
function of the
operating system. Hence, there is no guarantee that the term
has actually been written to the disk, it might linger in
the operating system kernel for a while. To make sure the
item is actually written to disk, the sync/1
function
must be called.
The log/2
function is used for internally formatted logs,
and blog/2
for externally formatted logs.
blog/2
can be used
for internally formatted logs as well provided the binary was
constructed with a call to term_to_binary/1
.
The owners that subscribe to notifications will be notified
of an error with an error_status
message if the error
reason tag is invalid_header
or file_error
.
log_terms(Log, TermList) ->
ok | {error, Reason :: log_error_rsn()}
Log = log()
TermList = [term()]
blog_terms(Log, BytesList) ->
ok | {error, Reason :: log_error_rsn()}
log_error_rsn() =
no_such_log |
nonode |
{read_only_mode, log()} |
{format_external, log()} |
{blocked_log, log()} |
{full, log()} |
{invalid_header, invalid_header()} |
{file_error, file:filename(), file_error()}
The log_terms/2
and blog_terms/2
functions
synchronously append a list of items to the log. The benefit
of using these functions rather than the log/2
and
blog/2
functions is that of efficiency: the given
list is split into as large sublists as possible (limited by
the size of wrap log files), and each sublist is logged as
one single item, which reduces the overhead.
The log_terms/2
function is used for internally formatted
logs, and blog_terms/2
for externally formatted logs.
blog_terms/2
can be used
for internally formatted logs as well provided the binaries were
constructed with calls to term_to_binary/1
.
The owners that subscribe to notifications will be notified
of an error with an error_status
message if the error
reason tag is invalid_header
or file_error
.
open(ArgL) -> open_ret() | dist_open_ret()
ArgL = dlog_options()
dlog_options() = [dlog_option()]
dlog_option() =
{name, Log :: log()} |
{file, FileName :: file:filename()} |
{linkto, LinkTo :: none | pid()} |
{repair, Repair :: true | false | truncate} |
{type, Type :: dlog_type()} |
{format, Format :: dlog_format()} |
{size, Size :: dlog_size()} |
{distributed, Nodes :: [node()]} |
{notify, boolean()} |
{head, Head :: dlog_head_opt()} |
{head_func, MFA :: {atom(), atom(), list()}} |
{mode, Mode :: dlog_mode()}
open_ret() = ret() | {error, open_error_rsn()}
ret() =
{ok, Log :: log()} |
{repaired,
Log :: log(),
{recovered, Rec :: integer() >= 0},
{badbytes, Bad :: integer() >= 0}}
dist_open_ret() =
{[{node(), ret()}], [{node(), {error, dist_error_rsn()}}]}
dist_error_rsn() = nodedown | open_error_rsn()
open_error_rsn() =
no_such_log |
{badarg, term()} |
{size_mismatch,
CurrentSize :: dlog_size(),
NewSize :: dlog_size()} |
{arg_mismatch,
OptionName :: dlog_optattr(),
CurrentValue :: term(),
Value :: term()} |
{name_already_open, Log :: log()} |
{open_read_write, Log :: log()} |
{open_read_only, Log :: log()} |
{need_repair, Log :: log()} |
{not_a_log_file, FileName :: file:filename()} |
{invalid_index_file, FileName :: file:filename()} |
{invalid_header, invalid_header()} |
{file_error, file:filename(), file_error()} |
{node_already_open, Log :: log()}
dlog_optattr() =
name |
file |
linkto |
repair |
type |
format |
size |
distributed |
notify |
head |
head_func |
mode
dlog_size() =
infinity |
integer() >= 1 |
{MaxNoBytes :: integer() >= 1, MaxNoFiles :: integer() >= 1}
The ArgL
parameter is a list of options which have
the following meanings:
-
{name, Log} specifies the name of the log. This is the name which must be passed on as a parameter in all subsequent logging operations. A name must always be supplied.
-
{file, FileName} specifies the name of the file which will be used for logged terms. If this value is omitted and the name of the log is either an atom or a string, the file name will default to lists:concat([Log, ".LOG"]) for halt logs. For wrap logs, this will be the base name of the files. Each file in a wrap log will be called <base_name>.N, where N is an integer. Each wrap log will also have two files called <base_name>.idx and <base_name>.siz.
-
{linkto, LinkTo}. If LinkTo is a pid, that pid becomes an owner of the log. If LinkTo is none, the log records that it is used anonymously by some process by incrementing the users counter. By default, the process which calls open/1 owns the log.
-
{repair, Repair}. If Repair is true, the current log file will be repaired, if needed. As the restoration is initiated, a message is output on the error log. If false is given, no automatic repair will be attempted. Instead, the tuple {error, {need_repair, Log}} is returned if an attempt is made to open a corrupt log file. If truncate is given, the log file will be truncated, creating an empty log. Default is true, which has no effect on logs opened in read-only mode.
-
{type, Type} is the type of the log. Default is halt.
-
{format, Format} specifies the format of the disk log. Default is internal.
-
{size, Size} specifies the size of the log. When a halt log has reached its maximum size, all attempts to log more items are rejected. The default size is infinity, which for halt logs implies that there is no maximum size. For wrap logs, the Size parameter may be either a pair {MaxNoBytes, MaxNoFiles} or infinity. In the latter case, if the files of an already existing wrap log with the same name can be found, the size is read from the existing wrap log, otherwise an error is returned. Wrap logs write at most MaxNoBytes bytes on each file and use MaxNoFiles files before starting all over with the first wrap log file. Regardless of MaxNoBytes, at least the header (if there is one) and one item is written on each wrap log file before wrapping to the next file. When opening an existing wrap log, it is not necessary to supply a value for the option Size, but any supplied value must equal the current size of the log, otherwise the tuple {error, {size_mismatch, CurrentSize, NewSize}} is returned.
-
{distributed, Nodes}. This option can be used for adding members to a distributed disk log. The default value is [], which means that the log is local on the current node.
-
{notify, boolean()}. If true, the owners of the log are notified when certain events occur in the log. Default is false. The owners are sent one of the following messages when an event occurs:
  -
  {disk_log, Node, Log, {wrap, NoLostItems}} is sent when a wrap log has filled up one of its files and a new file is opened. NoLostItems is the number of previously logged items that have been lost when truncating existing files.
  -
  {disk_log, Node, Log, {truncated, NoLostItems}} is sent when a log has been truncated or reopened. For halt logs NoLostItems is the number of items written on the log since the disk log process was created. For wrap logs NoLostItems is the number of items on all wrap log files.
  -
  {disk_log, Node, Log, {read_only, Items}} is sent when an asynchronous log attempt is made to a log file opened in read-only mode. Items is the items from the log attempt.
  -
  {disk_log, Node, Log, {blocked_log, Items}} is sent when an asynchronous log attempt is made to a blocked log that does not queue log attempts. Items is the items from the log attempt.
  -
  {disk_log, Node, Log, {format_external, Items}} is sent when alog/2 or alog_terms/2 is used for internally formatted logs. Items is the items from the log attempt.
  -
  {disk_log, Node, Log, full} is sent when an attempt to log items to a wrap log would write more bytes than the limit set by the size option.
  -
  {disk_log, Node, Log, {error_status, Status}} is sent when the error status changes. The error status is defined by the outcome of the last attempt to log items to the log or to truncate the log, or the last use of sync/1, inc_wrap_file/1 or change_size/2. Status is one of ok and {error, Error}, the former being the initial value.
-
{head, Head} specifies a header to be written first on the log file. If the log is a wrap log, the item Head is written first in each new file. Head should be a term if the format is internal, and a deep list of bytes (or a binary) otherwise. Default is none, which means that no header is written first on the file.
-
{head_func, {M,F,A}} specifies a function to be called each time a new log file is opened. The call M:F(A) is assumed to return {ok, Head}. The item Head is written first in each file. Head should be a term if the format is internal, and a deep list of bytes (or a binary) otherwise.
-
{mode, Mode} specifies if the log is to be opened in read-only or read-write mode. It defaults to read_write.
The open/1
function returns {ok, Log} if the
log file was successfully opened. If the file was
successfully repaired, the tuple
{repaired, Log, {recovered, Rec}, {badbytes, Bad}}
is returned, where Rec
is the number of whole Erlang terms found in the
file and Bad
is the number of bytes in the file which
were non-Erlang terms. If the distributed
parameter
was given, open/1
returns a list of
successful replies and a list of erroneous replies. Each
reply is tagged with the node name.
When a disk log is opened in read-write mode, any existing
log file is checked for. If there is none a new empty
log is created, otherwise the existing file is opened at the
position after the last logged item, and the logging of items
will commence from there. If the format is internal
and the existing file is not recognized as an internally
formatted log, a tuple {error, {not_a_log_file, FileName}}
is returned.
The open/1
function cannot be used for changing the
values of options of an already open log; when there are prior
owners or users of a log, all option values except name
,
linkto
and notify
are just checked against
the values that have been supplied before as option values
to open/1
, change_header/2
, change_notify/3
or change_size/2
. As a consequence,
none of the options except name
is mandatory. If some
given value differs from the current value, a tuple
{error, {arg_mismatch, OptionName, CurrentValue, Value}}
is returned. Caution: an owner's attempt to open a log
as owner once again is acknowledged with the return value
{ok, Log}, but the state of the disk log is not
affected in any way.
If a log with a given name is local on some node,
and one tries to open the log distributed on the same node,
then the tuple {error, {node_already_open, Log}} is
returned. The same tuple is returned if the log is distributed on
returned. The same tuple is returned if the log is distributed on
some node, and one tries to open the log locally on the same node.
Opening individual distributed disk logs for the first time
adds those logs to a (possibly empty) distributed disk log.
The option values supplied are used
on all nodes mentioned by the distributed
option.
Individual distributed logs know nothing
about each other's option values, so each node can be
given unique option values by creating a distributed
log with several calls to open/1
.
It is possible to open a log file more than once by giving
different values to the option name
or by using the
same file when distributing a log on different nodes.
It is up to the user of the disk_log
module to ensure that no more than one
disk log process has write access to any file, or
the file may be corrupted.
If an attempt to open a log file for the first time fails,
the disk log process terminates with the EXIT message
{{failed,Reason},[{disk_log,open,1}]}
.
The function returns {error, Reason}
for all other errors.
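As an illustration, the following sketch opens a wrap log with notifications enabled and reacts to a wrap or error_status message; the log name audit, the base file name and the size are examples only:

  %% Sketch: a wrap log of 5 files, 1 MB each, with notifications enabled.
  {ok, audit} = disk_log:open([{name, audit},
                               {file, "audit"},
                               {type, wrap},
                               {size, {1024 * 1024, 5}},
                               {notify, true}]),
  ok = disk_log:alog(audit, {login, some_user}),
  receive
      {disk_log, _Node, audit, {wrap, NoLostItems}} ->
          io:format("log wrapped, ~p items lost~n", [NoLostItems]);
      {disk_log, _Node, audit, {error_status, Status}} ->
          io:format("error status is now ~p~n", [Status])
  after 0 ->
      ok
  end.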
pid2name(Pid) -> {ok, Log} | undefined
Pid = pid()
Log = log()
The pid2name/1
function returns the name of the log
given the pid of a disk log process on the current node, or
undefined
if the given pid is not a disk log process.
This function is meant to be used for debugging only.
reopen(Log, File) -> ok | {error, reopen_error_rsn()}
Log = log()
File = file:filename()
reopen(Log, File, Head) -> ok | {error, reopen_error_rsn()}
Log = log()
File = file:filename()
Head = term()
breopen(Log, File, BHead) -> ok | {error, reopen_error_rsn()}
Log = log()
File = file:filename()
BHead = bytes()
reopen_error_rsn() =
no_such_log |
nonode |
{read_only_mode, log()} |
{blocked_log, log()} |
{same_file_name, log()} |
{invalid_index_file, file:filename()} |
{invalid_header, invalid_header()} |
{file_error, file:filename(), file_error()}
The reopen
functions first rename the log file
to File
and then re-create a new log file.
In case of a wrap log, File
is used as the base name
of the renamed files.
By default the header given to open/1
is written first in
the newly opened log file, but if the Head
or the BHead
argument is given, this item is used instead.
The header argument is used once only; next time a wrap log file
is opened, the header given to open/1
is used.
The reopen/2,3
functions are used for internally formatted
logs, and breopen/3
for externally formatted logs.
The owners that subscribe to notifications will receive
a truncate
message.
Upon failure to reopen the log, the disk log process terminates
with the EXIT message {{failed,Error},[{disk_log,Fun,Arity}]}
,
and other processes that have requests queued receive the message
{disk_log, Node, {error, disk_log_stopped}}
.
sync(Log) -> ok | {error, sync_error_rsn()}
Log = log()
sync_error_rsn() =
no_such_log |
nonode |
{read_only_mode, log()} |
{blocked_log, log()} |
{file_error, file:filename(), file_error()}
The sync/1
function ensures that the contents of the
log are actually written to the disk.
This is usually a rather expensive operation.
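A common pattern is therefore to log a whole batch of terms and call sync/1 once afterwards, as in this sketch:

  %% Sketch: write a batch of terms, then make sure they reach the disk.
  log_batch(Log, Terms) ->
      ok = disk_log:log_terms(Log, Terms),
      disk_log:sync(Log).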
truncate(Log) -> ok | {error, trunc_error_rsn()}
Log = log()
truncate(Log, Head) -> ok | {error, trunc_error_rsn()}
Log = log()
Head = term()
btruncate(Log, BHead) -> ok | {error, trunc_error_rsn()}
trunc_error_rsn() =
no_such_log |
nonode |
{read_only_mode, log()} |
{blocked_log, log()} |
{invalid_header, invalid_header()} |
{file_error, file:filename(), file_error()}
The truncate
functions remove all items from a disk log.
If the Head
or the BHead
argument is
given, this item is written first in the newly truncated
log, otherwise the header given to open/1
is used.
The header argument is only used once; next time a wrap log file
is opened, the header given to open/1
is used.
The truncate/1,2
functions are used for internally
formatted logs, and btruncate/2
for externally formatted
logs.
The owners that subscribe to notifications will receive
a truncate
message.
If the attempt to truncate the log fails, the disk log process
terminates with the EXIT message
{{failed,Reason},[{disk_log,Fun,Arity}]}
, and
other processes that have requests queued receive the message
{disk_log, Node, {error, disk_log_stopped}}
.
unblock(Log) -> ok | {error, unblock_error_rsn()}
Log = log()
The unblock/1
function unblocks a log.
A log can only be unblocked by the blocking process.