Mono-spaced Bold

To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.

Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
mono-spaced bold. For example:

File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).

To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
Output sent to a terminal is set in mono-spaced roman and presented thus:

books        Desktop   documentation  drafts  mss     photos   stuff  svn
books_tests  Desktop1  downloads      images  notes   scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
Traditional Linux tools (such as gcc, gdb, etc.) and Eclipse offer two distinct approaches to programming. Most traditional Linux tools are far more flexible, subtle, and (in aggregate) more powerful than their Eclipse-based counterparts. These traditional Linux tools, on the other hand, are more difficult to master, and offer more capabilities than are required by most programmers or projects. Eclipse, by contrast, sacrifices some of these benefits in favor of an integrated environment, which in turn is suitable for users who prefer their tools accessible in a single, graphical interface.
Eclipse provides development toolkits for C/C++ (CDT) and Java (JDT). These toolkits provide a set of integrated tools specific to their respective languages. Both toolkits supply Eclipse GUI interfaces with the required tools for editing, building, running, and debugging source code.
CDT and JDT also provide multiple editors for a variety of file types used in a project. For example, the CDT supplies different editors specific for C/C++ header files and source files, along with a Makefile editor.
The CDT source file editor, for example, provides error parsing in the context of a single file, but some errors may only be visible when a complete project is built. Other common features among toolkit-supplied editors are colorization, code folding, and automatic indentation. In some cases, other plug-ins provide advanced editor features such as automatic code completion, hover help, and contextual search; a good example of such a plug-in is libhover, which adds these extended features to C/C++ editors (refer to Section 2.2.2, “libhover Plug-in” for more information).
The Autotools plug-in, for example, allows you to add portability to a C/C++ project, allowing other developers to build the project in a wide variety of environments (for more information, refer to Section 5.3, “Autotools”).
For more information on either toolkit (CDT or JDT), refer to the Java Development User Guide or C/C++ Development User Guide in the Eclipse Help Contents.
Default editor settings apply (e.g. for .c files) for most types of source files. To configure the settings for the Editor, navigate to Window > Preferences > C/C++ > Editor.
Editors also track comments marked with //FIXME or //TODO tags. Tracked comments—i.e. task tags—are different for source files written in other languages. To add or configure task tags, navigate to Window > Preferences and use the keyword task tags to display the task tag configuration menus for specific editors/languages.
The libhover plug-in for Eclipse provides plug-and-play hover help support for the GNU C Library and GNU C++ Standard Library. This allows developers to refer to existing documentation on the glibc and libstdc++ libraries within the Eclipse IDE in a more seamless and convenient manner via hover help and code completion.
For C++ sources, libhover needs to index the file using the CDT indexer. Indexing parses the given file in context of a build; the build context determines where header files come from and how types, macros, and similar items are resolved. To be able to index a C++ source file, libhover usually requires you to perform an actual build first, although in some cases it may already know where the header files are located.
The libhover plug-in may need indexing for C++ sources because a C++ member function name is not enough information to look up its documentation. For C++, the class name and parameter signature of the function are also required to determine exactly which member is being referenced. This is because C++ allows different classes to have members of the same name, and even within a class, members may have the same name but with different method signatures.
Resolving these names also requires locating the proper include files; libhover can only do this via indexing.
For C functions, libhover does not need to index source files in order to provide hover help or code completion. Simply choose an appropriate C header file to be included for a selection.
Hover help for all installed libhover libraries is enabled by default, and it can be disabled per project. To disable or enable hover help for a particular project, right-click the project name and click Properties. On the menu that appears, navigate to C/C++ General > Documentation. Check or uncheck a library in the Help books section to enable or disable hover help for that particular library.
Disabling hover help from one or more libraries may be preferable when multiple libhover libraries overlap in functionality. For example, the newlib library (whose libhover library plug-in is supported in Red Hat Enterprise Linux 6) contains functions whose names overlap with those in the GNU C library (provided by default); having libhover plug-ins for both newlib and glibc installed would mean having to disable one.
If multiple libhover libraries are enabled and there exists a functional overlap between them, the Help content for the function from the first listed library in the Help books section will appear in hover help (i.e. in Figure 2.18, “Enabling/Disabling Hover Help”, glibc). For code completion, libhover will offer all possible alternatives from all enabled libhover libraries.
When hovering over a function name, libhover will display library documentation on the selected C function or C++ member function.
The first option is to use the admin command to remove stale locked files and back out the changed files. The second option is to reissue the commit command.
If the repository is on the local machine (signaled by the :local: access method), the client spawns the server on its behalf. If the repository is on a remote machine, the server can be started with rsh/SSH (CVS_RSH environment variable) by a client or by an inet daemon (/etc/xinetd.d/cvs), and different authentication methods (the :gserver: access method integrates Kerberos authentication, for example) can be used.
The following example demonstrates creating a CVS repository in the $CVSROOT directory (using an absolute path to signal the :local: access method), importing sources from $SOURCES, checking them out from the repository into $WORKDIR, modifying some files, and committing the changes.
$ mkdir "$CVSROOT"
$ cvs -d "$CVSROOT" init
This creates the CVSROOT subdirectory under $CVSROOT containing the repository configuration.
Import the $SOURCES directory into CVS as $REPOSITORY, tagged with $VENDOR_TAG and $RELEASE_TAG, with a commit $MESSAGE.
$ cd "$SOURCES"
$ cvs -d "$CVSROOT" import -m "$MESSAGE" "$REPOSITORY" \
    "$VENDOR_TAG" "$RELEASE_TAG"
The $SOURCES content should be imported into CVS under $CVSROOT/$REPOSITORY. It is possible to have more repositories in one CVS storage, though this example just uses the one. The $VENDOR_TAG and $RELEASE_TAG are tags for implicit repository branches.
Check out a working copy from the repository into $WORKDIR.
$ cd "$WORKDIR"
$ cvs -d "$CVSROOT" checkout "$REPOSITORY"
Do not check out into the original $SOURCES directory. This could cause data corruption on the client side, and CVS will print errors on various CVS invocations.
The checkout creates the $REPOSITORY subdirectory. The developer can also check out multiple repositories from one server.
$ cd $REPOSITORY
To add a new file $FILE, use:
$ vi "$FILE"
$ cvs add "$FILE"
Modify an existing file $EXISTING_FILE.
$ vi "$EXISTING_FILE"
Commit the changes with a commit message $COMMIT_MESSAGE:
$ cvs commit -m "$COMMIT_MESSAGE"
A developer can store the $CVSROOT value as a CVSROOT environment variable and the cvs tool will respect it. This can free the developer from needing to repetitively supply the -d "$CVSROOT" option. The value is stored in the CVS helper subdirectory at initial check-out, and the CVS tool takes the value from there automatically.
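For example, the variable can be exported once per shell session (the /var/cvs path below is purely illustrative):

$ export CVSROOT=/var/cvs
$ cvs checkout "$REPOSITORY"

With CVSROOT exported, the -d option can be omitted from subsequent cvs invocations.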
For more information about CVS, refer to man cvs and the FAQ file at /usr/share/doc/cvs-*/FAQ.
Install SVN with yum install subversion. Selecting this option ensures that only Red Hat certified packages are used and removes the need to manually update them.
Alternatively, when building SVN from source, follow the instructions in its install file.
# mkdir /var/svn
# svnadmin create /var/svn/repos
# ls /var/svn/repos/
conf  db  format  hooks  locks  README.txt
This creates a new repository /var/svn/repos with the required database files.
The repository can then be served over a network with the svnserve server.
A recommended repository layout has three top-level directories: branches, tags, and trunk. This is not required by SVN but it is a popular convention. The trunk directory should contain the project's files, and the branches and tags directories should remain empty. For example:
myproject/branches/
myproject/tags/
myproject/trunk/
    foo.c
    bar.c
    Makefile
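As a sketch, the layout above can be prepared locally before the import with ordinary shell commands (the names follow the convention shown):

$ mkdir -p myproject/branches myproject/tags myproject/trunk
$ cp foo.c bar.c Makefile myproject/trunk/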
Once the layout is created, it can be imported into the repository with the svn import command. For example:
$ svn import /path/to/mytree \
  http://host.example.com/svn/repo/myproject \
  -m "Initial import"
Adding         myproject/foo.c
Adding         myproject/bar.c
Adding         myproject/subdir
Adding         myproject/subdir/quux.h

Committed revision 1.
$
$ svn list http://host.example.com/svn/repo/myproject
Create a working copy of the project with the svn checkout command. For example:
$ svn checkout http://host.example.com/svn/repo/trunk
A  trunk/README
A  trunk/INSTALL
A  trunk/src/main.c
A  trunk/src/header.h
...
Checked out revision 8810.
$
$ svn checkout http://host.example.com/svn/repo/trunk my-working-copy
Every directory in the working copy contains an administrative subdirectory named .svn. Being an administrative directory, it will not usually appear with a list command. This is an important directory and should not be deleted or changed. Subversion uses this directory to manage the working copy, and tampering with it will cause errors and instability. If the directory is accidentally deleted, the best way to fix it is to delete the entire containing directory (a normal system delete, not svn delete) and run svn update from a parent directory. The deleted directory will be recreated, including the missing or changed .svn directory. Note that this can cause a loss of data.
svn add filename
svn delete filename
svn copy filename1 filename2 creates filename2, which is an exact copy of filename1. It then schedules filename2 for addition on the next commit. Note that svn copy does not create intermediate directories unless the --parents option is passed.
svn move filename1 filename2 is equivalent to svn copy filename1 filename2 followed by svn delete filename1. A copy is made, and then filename1 is scheduled to be deleted on the next commit. Note that svn move does not create intermediate directories unless the --parents option is passed.
svn mkdir directory creates the directory and schedules it for addition.

It is also possible to run svn mkdir, svn copy, svn move, and svn delete directly on the repository URL. The downside of using this is that with a working copy the changes can be checked before publishing them, to ensure that is actually the way they were intended.
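For example, a directory can be created directly in the repository; since no working copy is involved, a commit message must be supplied immediately (the URL continues the hypothetical example used above):

$ svn mkdir http://host.example.com/svn/repo/myproject/newdir \
  -m "Create a new directory in the repository"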
If neither the working copy nor the repository has changed, running svn commit and svn update will do nothing.

If only the working copy has changed, running svn commit will update the repository with the changes in the working directory; running svn update will do nothing.

If only the repository has changed, running svn commit will do nothing; running svn update will merge the changes in the repository with the local working copy.

If both have changed and svn commit is run first, an 'out-of-date' error will occur. Update the file first. Running svn update will attempt to merge the changes in the repository with those on the working copy. If there are conflicts, SVN will provide options for the user to decide on the best course of action to resolve them.
Running svn status will display all of the files in the working tree that do not match the current version in the repository, coded by a letter.
If --verbose (-v) is passed with svn status, the status of every item in the working copy will be displayed, even those that have not been changed. For example:
$ svn status -v
M               44       23    sally     README
                44       30    sally     INSTALL
M               44       20    harry     bar.c
                44       18    ira       stuff
                44       35    harry     stuff/trout.c
D               44       19    ira       stuff/fish.c
                44       21    sally     stuff/things
A                0        ?     ?        stuff/things/bloo.h
                44       36    harry     stuff/things/gloo.c
Pass --show-updates (-u) with svn status to see which files will change on the next update. An asterisk (*) will be displayed between the letter status and the working revision number on any files that will be updated when performing an svn update.
To see the exact changes, use the svn diff command. This displays changes in a unified diff format, describing changes as 'snippets' of a file's content where each line is prefixed with a character: a space for no change, a minus sign (-) for a line removed, and a plus sign (+) for an added line.
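A hypothetical run, in which one line of a file was replaced, might look like this:

$ svn diff README
Index: README
===================================================================
--- README	(revision 44)
+++ README	(working copy)
@@ -1,3 +1,3 @@
 This is the README file.
-Old description line.
+New description line.
 More text follows.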
To publish changes to the repository, use the svn commit command, appending the option -m with a commit message:
$ svn commit filename -m "Fixed a typo in filename"
Sending        filename
Transmitting file data .
Committed revision 57.
$
svn --help provides information on the available commands to be used in conjunction with SVN, and svn subcommand --help provides more detailed information on the specified subcommand.
Building Git from source requires the following dependencies: curl, zlib, openssl, expat, and libiconv.
$ sudo yum install curl-devel expat-devel gettext-devel \
  openssl-devel zlib-devel gcc
$ tar -zxf git-1.7.6.1.tar.gz
$ cd git-1.7.6.1
$ make prefix=/usr/local
$ sudo make prefix=/usr/local install
$ git clone git://git.kernel.org/pub/scm/git/git.git
$ yum install git
The /etc/gitconfig file contains variables for every user on the system and all their repositories. It holds the base settings, and passing --system to git config sets it to read and write from this file.
The ~/.gitconfig file is specific to the user. Passing --global tells Git to read and write to this file, overriding the settings made in the first point.
The config file in the Git directory (.git/config) of the repository currently being used is specific to that repository only and overrides the settings in both the first and the second point.
$ git config --global user.name "John Smith"
$ git config --global user.email "jsmith@example.com"
With the --global option this only needs to be set once, but it can be overridden for specific repositories.
To set the default text editor, run git config --global core.editor EditorName. To set the default merge tool, run:

$ git config --global merge.tool DiffTool
$ git config --list
user.name=John Smith
user.email=jsmith@example.com
A specific key can also be checked with the git config {key} command. For example:

$ git config user.name
John Smith
$ git clone git://location/of/git/repository.git
Note that the command is git clone as opposed to git checkout, as it might be for a version control system similar to CVS and SVN. This is because Git receives a copy of every file in the project's entire history, as opposed to only the most recent files as with other version control systems.
By default, the clone is created in a directory named after the repository with the .git suffix removed. However, the clone command can use any other name simply by appending the desired directory name to the end:
$ git clone git://location/of/git/repository.git my_git_repo
While the example above uses the git:// protocol, it is also possible to use http:// or https:// as appropriate.
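For example, the same repository could be cloned over HTTPS (the URL is hypothetical):

$ git clone https://host.example.com/path/to/repository.git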
$ git init
Check the status of files with the git status command. For a newly started project, there will be files in the untracked status.
$ git status
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	filename
nothing added to commit but untracked files present (use "git add" to track)
Track new files with the git add command.
$ git add filename
Running git add filename will add that specific file first to the unmodified section. Use git add . to add all files in the current directory (including any sub-directories), or, for example, git add *.[ch] to add all .c and .h files in the current directory.
$ git rm filename
rm 'filename'
$ git status
# On branch master
#
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	deleted:    filename
#
If the file is unmodified, git rm filename will remove the entire file. It is only when a file has uncommitted changes that git rm filename will give a diagnostic and not remove it. To remove a file despite the uncommitted changes, use the --force or -f option.
To stop a file from being tracked without deleting it from the working tree, use the --cached option, then commit the removal.
$ git rm --cached filename
$ git commit -m 'remove file message'
After a tracked file is modified, git status will show this:
$ git status
# On branch master
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#
#	modified:   filename
#
Running the git add filename command again will push the modified file to the staged status, ready to be committed.
$ git add filename
$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	new file:   filename
#
$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	modified:   filename1
#
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#
#	modified:   filename1
#
Running git add again will resolve this, and the modified snapshot of the file will merge with the snapshot on the staged status, ready to commit the new changes.
To see exactly what changed but has not yet been staged, this is where the git diff command comes in.
$ git diff
diff --git a/filename b/filename
index 3cb747f..da65585 100644
--- a/filename
+++ b/filename
@@ -36,6 +36,10 @@ def main
   @commit.parents[0].parents[0].parents[0]
 end

+some code
+some more code
+a comment
+another change
-a mistake
The git diff command with no parameters, as above, compares the working directory to what is in the staged status, displaying changes made but not yet committed.
To compare what is staged against the most recent commit, pass the --cached option.
$ git diff --cached
diff --git a/filename b/filename
new file mode 100644
index 0000000..03902a1
--- /dev/null
+++ b/filename
@@ -0,0 +1,5 @@
+file
+ by name1, name2
+http://path/to/file
+
+added information to file
It is also possible to use the --staged option instead of --cached.
$ git commit
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	new file:   filename2
#	modified:   filename1
~
~
~
".git/COMMIT_EDITMSG" 10L, 283C
For a more explicit reminder of what has changed, pass the -v option with git commit. The commit message can be entered here, or it can be entered on the command line with the -m option:
$ git commit -m "commit message"
[master]: created 4156dc4f: "commit message"
 2 files changed, 3 insertions(+), 1 deletions(-)
 create mode 100644 filename
To skip the staging area and commit all modified tracked files, pass the -a option with git commit.
$ git status
# On branch master
#
# Changed but not updated:
#
#	modified:   filename
#
$ git commit -a -m 'commit message'
[master 16e15c7] commit message
 1 files changed, 5 insertions(+), 2 deletions(-)
To add a remote repository, run git remote add [shortname] [URL]:

$ git remote add shortname git://path/to/new/repo
To list the current remote repositories, use the git remote command, passing the -v option to display the associated URL as well as the shortname if desired. Even without adding a new remote repository, if the project was cloned it will list at least the repository it was cloned from.
$ git remote -v
repo-name  git://path/to/new/remote-repository
origin     git://path/to/original/remote-repository
Running git remote show [remote-repository] will list the URL as well as the branches Git is tracking for this particular repository.
$ git fetch [remote-name]
If the project was cloned, git pull will fetch in the same way from the repository the original copy was cloned from, and also merge the changes into the current branch.
$ git push remote-repository branch-name
To rename a repository, run git remote rename original-name new-name. Keep in mind that this will change the remote branch names as well. Removing a repository is similar: git remote rm remote-name.
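For example (both remote names here are hypothetical):

$ git remote rename origin upstream
$ git remote rm upstream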
To view past commits, use the git log command. When this is run with no arguments, it lists each commit in reverse chronological order, presenting the time and date of the commit, the SHA-1 checksum, the author's name and email, and the commit message.
The command git log -p -2 will list the normal log information in addition to the diff reports for the two most recent entries.
To include summary statistics, use git log --stat. This command will include a list of modified files, how many files were changed, and how many lines were added and removed, followed by a summary of this information, after the commit message.
--shortstat is similar to the --stat option, but it displays only the changed files, insertions, and deletions in a commit.
--name-only lists only the names of the files modified in each commit.

--name-status lists the modified files along with a letter indicating how each was changed.

--abbrev-commit shows only the first few characters of each SHA-1 checksum instead of all 40.
--relative-date displays the date relative to the current time. For example, instead of Tue July 10:53:11 2011 -0700, it will print 2 weeks ago.
--graph displays an ASCII graph of the branch and merge history beside the log output.
--since=[date] prints the log of commits made after the specified date.

--until=[date] is similar to the --since option, but it prints the log up until the specified date; that is, everything before the date.
--author name lists only commits whose author matches the specified name.

--committer name lists only commits whose committer matches the specified name.
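These options can be combined. For example, the following sketch (with a hypothetical author) lists commits from the last two weeks by one author, with per-file statistics and abbreviated checksums:

$ git log --stat --abbrev-commit --since="2 weeks ago" --author="John Smith"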
To fix the most recent commit, use the --amend option.
$ git commit --amend
Files can also be unstaged, for example when git add * was used but the intention was to have two (or more) separate commits. The git status command provides a hint on how to do this as well:
# (use "git reset HEAD <file>..." to unstage)
So run git reset HEAD filename, and the file is reverted to the modified status rather than the staged status.
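For example, with a hypothetical staged file hello.c:

$ git reset HEAD hello.c
Unstaged changes after reset:
M	hello.c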
To revert a file to what it looked like at the last commit, the git status command comes to the rescue again in the unstaged status:
# (use "git checkout -- <file>..." to discard changes in working directory)
Running git checkout -- filename reverts the file. To reiterate the above warning, however, this will cause data loss; only use it when it is certain this version of the file is no longer wanted.
For more information, refer to man git. This also provides the commands to access other man pages such as gittutorial(7), Everyday Git[1], and gitglossary(7).
Package Name | Red Hat Enterprise Linux 6 | Red Hat Enterprise Linux 5 | Red Hat Enterprise Linux 4 |
---|---|---|---|
glibc | 2.12 | 2.5 | 2.3 |
libstdc++ | 4.4 | 4.1 | 3.4 |
boost | 1.41 | 1.33 | 1.32 |
java | 1.5 (IBM), 1.6 (IBM, OpenJDK, Oracle Java) | 1.4, 1.5, and 1.6 | 1.4 |
python | 2.6 | 2.4 | 2.3 |
php | 5.3 | 5.1 | 4.3 |
ruby | 1.8 | 1.8 | 1.8 |
httpd | 2.2 | 2.2 | 2.0 |
postgresql | 8.4 | 8.1 | 7.4 |
mysql | 5.1 | 5.0 | 4.1 |
nss | 3.12 | 3.12 | 3.12 |
openssl | 1.0.0 | 0.9.8e | 0.9.7a |
libX11 | 1.3 | 1.0 | |
firefox | 3.6 | 3.6 | 3.6 |
kdebase | 4.3 | 3.5 | 3.3 |
gtk2 | 2.18 | 2.10 | 2.04 |
The compat-glibc RPM is included with Red Hat Enterprise Linux 6, but it is not a runtime package and therefore not needed for running anything. It is solely a development package, containing header files and dummy libraries for linking. This allows compiling and linking packages to run in older Red Hat Enterprise Linux versions (using compat-gcc-* against those headers and libraries). Running rpm -qpi compat-glibc-* will provide some information on how to use this package.
Package Name | Files | RHEL 5 API | RHEL 5 ABI | RHEL 4 API | RHEL 4 ABI | Notes |
---|---|---|---|---|---|---|
glibc | libc, libm, libdl, libutil, libcrypt | 2 | 2 | 3 | 2 | See notes for RHEL 2 and 3. |
libstdc++ | libstdc++ | 2 | 2 | 3 | 2 | See notes for RHEL 3. |
zlib | libz | 1 | ? | 1 | ? | |
ncurses-libs | libncurses | 1 | ? | 1 | ? | |
nss | libnss3, libssl3 | ? | ? | | | |
gtk2 | libgdk-x11-2.0, libgdk_pixbuf-2.0, libgtk-x11-2.0 | 2 | ? | ? | | |
glib2 | libglib-2.0, libgmodule-2.0, libgthread-2.0 | 2 | ? | ? | | |
Package Name | Files | RHEL 5 API | RHEL 5 ABI | RHEL 4 API | RHEL 4 ABI |
---|---|---|---|---|---|
boost | libboost_filesystem, libboost_threads | 4 | 4 | 4 | 4 |
openssl | libssl, libcrypto | 4 | 3 | 4 | 4 |
The glibc package contains the GNU C Library. This defines all functions specified by the ISO C standard, POSIX specific features, some Unix derivatives, and GNU-specific extensions. The most important set of shared libraries in the GNU C Library are the standard C and math libraries.
For example, the stdio.h header file defines I/O-specific facilities, while math.h defines functions for computing mathematical operations.
preadv
preadv64
pwritev
pwritev64
malloc_info
mkostemp
mkostemp64
epoll_pwait
sched_getcpu
accept4
fallocate
fallocate64
inotify_init1
dup3
epoll_create1
pipe2
signalfd
eventfd
eventfd_read
eventfd_write
asprintf
dprintf
obstack_printf
vasprintf
vdprintf
obstack_vprintf
fread
fread_unlocked
open*
mq_open
For a complete list of changes, refer to /usr/share/doc/glibc-version/NEWS. All changes as of version 2.12 apply to the GNU C Library in Red Hat Enterprise Linux 6. Some of these changes have also been backported to Red Hat Enterprise Linux 5 versions of glibc.
The libstdc++ package contains the GNU C++ Standard Library, which is an ongoing project to implement the ISO 14882 Standard C++ library.
Installing the libstdc++ package will provide just enough to satisfy link dependencies (i.e. only shared library files). To make full use of all available libraries and header files for C++ development, you must install libstdc++-devel as well. The libstdc++-devel package also contains a GNU-specific implementation of the Standard Template Library (STL).
Recent Red Hat Enterprise Linux releases can use the system libstdc++ directly. However, this is not the case for Red Hat Enterprise Linux 2 and 3. For Red Hat Enterprise Linux 2, compat-libstdc++-296 needs to be installed. For Red Hat Enterprise Linux 3, compat-libstdc++-33 needs to be installed. Neither of these is installed by default, so they need to be added separately.
<tr1/array>
<tr1/complex>
<tr1/memory>
<tr1/functional>
<tr1/random>
<tr1/regex>
<tr1/tuple>
<tr1/type_traits>
<tr1/unordered_map>
<tr1/unordered_set>
<tr1/utility>
<tr1/cmath>
<array>
<chrono>
<condition_variable>
<forward_list>
<functional>
<initializer_list>
<mutex>
<random>
<ratio>
<regex>
<system_error>
<thread>
<tuple>
<type_traits>
<unordered_map>
<unordered_set>
-fvisibility option.
__gnu_cxx::typelist
__gnu_cxx::throw_allocator
For more information about updates to libstdc++ in Red Hat Enterprise Linux 6, refer to the C++ Runtime Library section of the following documents:
To use the man pages for library components, install the libstdc++-docs package. This will provide man page information for nearly all resources provided by the library; for example, to view information about the vector container, use its fully-qualified component name:

man std::vector
std::vector(3)                                                std::vector(3)

NAME
       std::vector -

       A standard container which offers fixed time access to individual
       elements in any order.

SYNOPSIS
       Inherits std::_Vector_base< _Tp, _Alloc >.

   Public Types
       typedef _Alloc allocator_type
       typedef __gnu_cxx::__normal_iterator< const_pointer, vector > const_iterator
       typedef _Tp_alloc_type::const_pointer const_pointer
       typedef _Tp_alloc_type::const_reference const_reference
       typedef std::reverse_iterator< const_iterator >
The libstdc++-docs package also provides manuals and reference information in HTML form at the following directory:

file:///usr/share/doc/libstdc++-docs-version/html/spine.html
The boost package contains a large number of free peer-reviewed portable C++ source libraries. These libraries are suitable for tasks such as portable file-systems and time/date abstraction, serialization, unit testing, thread creation and multi-process synchronization, parsing, graphing, regular expression manipulation, and many others.
Installing the boost package will provide just enough libraries to satisfy link dependencies (i.e. only shared library files). To make full use of all available libraries and header files for C++ development, you must install boost-devel as well.
The boost package is actually a meta-package, containing many library sub-packages. These sub-packages can also be installed in an a la carte fashion to provide finer inter-package dependency tracking. The meta-package includes all of the following sub-packages:
boost-date-time
boost-filesystem
boost-graph
boost-iostreams
boost-math
boost-program-options
boost-python
boost-regex
boost-serialization
boost-signals
boost-system
boost-test
boost-thread
boost-wave
boost-openmpi
boost-openmpi-devel
boost-graph-openmpi
boost-openmpi-python
boost-mpich2
boost-mpich2-devel
boost-graph-mpich2
boost-mpich2-python
boost-static
package will install the necessary static libraries. Both thread-enabled and single-threaded libraries are provided.
Several aspects of the boost package have changed. As noted above, the monolithic boost package has been augmented by smaller, more discrete sub-packages. This allows for more control of dependencies by users, and for smaller binary packages when packaging a custom application that uses Boost.
In addition, the library names no longer carry the mt suffix, as per the usual Boost convention.
The boost-doc package provides manuals and reference information in HTML form located in the following directory:

file:///usr/share/doc/boost-doc-version/index.html
The qt package provides the Qt (pronounced "cute") cross-platform application development framework used in the development of GUI programs. Aside from being a popular "widget toolkit", Qt is also used for developing non-GUI programs such as console tools and servers. Qt was used in the development of notable projects such as Google Earth, KDE, Opera, OPIE, VoxOx, Skype, VLC media player and VirtualBox. It is produced by Nokia's Qt Development Frameworks division, which came into being after Nokia's acquisition of the Norwegian company Trolltech, the original producer of Qt, on June 17, 2008.
The qt-doc package provides HTML manuals and references located in /usr/share/doc/qt4/html/. This package also provides the Qt Reference Documentation, which is an excellent starting point for development within the Qt framework.
Further demos and examples are provided by qt-demos and qt-examples. To get an overview of the capabilities of the Qt framework, refer to /usr/bin/qtdemo-qt4 (provided by qt-demos).
The kdelibs-devel package provides the KDE libraries, which build on Qt to provide a framework for making application development easier. The KDE development framework also helps provide consistency across the KDE desktop environment.
For example, the Sonnet spell-checking framework replaced kspell2 in KDE4.
The legacy key3.db and cert8.db databases are also replaced with new SQL databases called key4.db and cert9.db. These new databases will store PKCS #11 token objects, which are the same as what is currently stored in cert8.db and key3.db.
A system-wide database is available at /etc/pki/nssdb, where globally trusted CA certificates become accessible to all applications. The command rv = NSS_InitReadWrite("sql:/etc/pki/nssdb"); initializes NSS for applications. If the application is run with root privileges, then the system-wide database is available on a read and write basis. However, if it is run with normal user privileges, it becomes read only.
The python package adds support for the Python programming language. This package provides the object and cached bytecode files needed to enable runtime support for basic Python programs. It also contains the python interpreter and the pydoc documentation tool. The python-devel package contains the libraries and header files needed for developing Python extensions.
Red Hat Enterprise Linux also ships with numerous python-related packages. By convention, the names of these packages have a python prefix or suffix. Such packages are either library extensions or Python bindings to an existing library. For instance, dbus-python is a Python language binding for D-Bus.
Note that both cached bytecode (*.pyc/*.pyo files) and compiled extension modules (*.so files) are incompatible between Python 2.4 (used in Red Hat Enterprise Linux 5) and Python 2.6 (used in Red Hat Enterprise Linux 6). As such, you will need to rebuild any extension modules you use that are not part of Red Hat Enterprise Linux.
For more information about Python, refer to man python. You can also install python-docs, which provides HTML manuals and references in the following location:

file:///usr/share/doc/python-docs-version/html/index.html
For details on library and language components, use pydoc component_name. For example, pydoc math will display the following information about the math Python module:
Help on module math:

NAME
    math

FILE
    /usr/lib64/python2.6/lib-dynload/mathmodule.so

DESCRIPTION
    This module is always available.  It provides access to the
    mathematical functions defined by the C standard.

FUNCTIONS
    acos(...)
        acos(x)
        Return the arc cosine (measured in radians) of x.

    acosh(...)
        acosh(x)
        Return the hyperbolic arc cosine (measured in radians) of x.

    asin(...)
        asin(x)
        Return the arc sine (measured in radians) of x.

    asinh(...)
        asinh(x)
        Return the hyperbolic arc sine (measured in radians) of x.
The java-1.6.0-openjdk package adds support for the Java programming language. This package provides the java interpreter. The java-1.6.0-openjdk-devel package contains the javac compiler, as well as the libraries and header files needed for developing Java extensions.
Red Hat Enterprise Linux also ships with numerous java-related packages. By convention, the names of these packages have a java prefix or suffix.
For more information about Java, refer to man java. Some associated utilities also have their own respective man pages.
Java documentation packages carry a javadoc suffix (e.g. dbus-java-javadoc).
The ruby package provides the Ruby interpreter and adds support for the Ruby programming language. The ruby-devel package contains the libraries and header files needed for developing Ruby extensions.
Red Hat Enterprise Linux also ships with numerous ruby-related packages. By convention, the names of these packages have a ruby or rubygem prefix or suffix. Such packages are either library extensions or Ruby bindings to an existing library.
Changes to ruby in Red Hat Enterprise Linux are documented in the following files:

file:///usr/share/doc/ruby-version/NEWS
file:///usr/share/doc/ruby-version/NEWS-version
For more information about Ruby, refer to man ruby. You can also install ruby-docs, which provides HTML manuals and references in the following location:

file:///usr/share/doc/ruby-docs-version/
The perl package adds support for the Perl programming language. This package provides Perl core modules, the Perl Language Interpreter, and the PerlDoc tool.
Additional Perl modules are packaged with a perl-* prefix. These modules provide stand-alone applications, language extensions, Perl libraries, and external library bindings.
Red Hat Enterprise Linux 6 ships with perl-5.10.1. If you are running an older system, rebuild or alter external modules and applications accordingly in order to ensure optimum performance.
Modules can be installed with yum or rpm from the Red Hat Enterprise Linux repositories. They are installed to /usr/share/perl5 and either /usr/lib/perl5 for 32-bit architectures or /usr/lib64/perl5 for 64-bit architectures.
Alternatively, use the cpan tool provided by the perl-CPAN package to install modules directly from the CPAN website. They are installed to /usr/local/share/perl5 and either /usr/local/lib/perl5 for 32-bit architectures or /usr/local/lib64/perl5 for 64-bit architectures.
Third-party module packages are installed to /usr/share/perl5/vendor_perl and either /usr/lib/perl5/vendor_perl for 32-bit architectures or /usr/lib64/perl5/vendor_perl for 64-bit architectures.
Module man pages are installed under the /usr/share/man directory.
The perldoc tool provides documentation on language and core modules. To learn more about a module, use perldoc module_name. For example, perldoc CGI will display the following information about the CGI core module:
NAME
    CGI - Handle Common Gateway Interface requests and responses

SYNOPSIS
    use CGI;

    my $q = CGI->new;

    [...]

DESCRIPTION
    CGI.pm is a stable, complete and mature solution for processing and
    preparing HTTP requests and responses. Major features including
    processing form submissions, file uploads, reading and writing cookies,
    query string generation and manipulation, and processing and preparing
    HTTP headers.

    Some HTML generation utilities are included as well.

    [...]

PROGRAMMING STYLE
    There are two styles of programming with CGI.pm, an object-oriented
    style and a function-oriented style. In the object-oriented style you
    create one or more CGI objects and then use object methods to create
    the various elements of the page. Each CGI object starts out with the
    list of named parameters that were passed to your CGI script by the
    server.

    [...]
Information on Perl functions can be obtained with perldoc -f function_name. For example, perldoc -f split will display the following information about the split function:
split /PATTERN/,EXPR,LIMIT
split /PATTERN/,EXPR
split /PATTERN/
split   Splits the string EXPR into a list of strings and returns that
        list. By default, empty leading fields are preserved, and empty
        trailing ones are deleted. (If all fields are empty, they are
        considered to be trailing.)

        In scalar context, returns the number of fields found. In scalar
        and void context it splits into the @_ array. Use of split in
        scalar and void context is deprecated, however, because it
        clobbers your subroutine arguments.

        If EXPR is omitted, splits the $_ string. If PATTERN is also
        omitted, splits on whitespace (after skipping any leading
        whitespace). Anything matching PATTERN is taken to be a delimiter
        separating the fields. (Note that the delimiter may be longer
        than one character.)

        [...]
GCC provides compilers (such as gcc and g++), run-time libraries (like libgcc, libstdc++, libgfortran, and libgomp), and miscellaneous other utilities.
Experimental support for the upcoming ISO C++ standard (C++0x) is enabled with -std=c++0x (which disables GNU extensions) or -std=gnu++0x.
Support for the Intel AVX instruction set is enabled when -mavx is used.
The -Wabi option can be used to get diagnostics indicating where these constructs appear in source code, though it will not catch every single case. This flag is especially useful for C++ code to warn whenever the compiler generates code that is known to be incompatible with the vendor-neutral C++ ABI.
It is possible to build with the older ABI by using the -fabi-version=1 option. This practice is not recommended. Objects created this way are indistinguishable from objects conforming to the current stable ABI, and can be linked (incorrectly) amongst the different ABIs, especially when using new compilers to generate code to be linked with old libraries that were built with tools prior to RHEL4.
Changes in the linker ld (distributed as part of the binutils package) or in the dynamic loader (ld.so, distributed as part of the glibc package) can subtly change the object files that the compiler produces. These changes mean that object files moving to the current release of Red Hat Enterprise Linux from previous releases may lose functionality, behave differently at runtime, or otherwise interoperate in a diminished capacity. Known problem areas include:
ld --build-id — used by RHEL6 ld by default, whereas RHEL5 ld doesn't recognize it.
as .cfi_sections support — in RHEL6, this directive allows .debug_frame, .eh_frame or both to be emitted from .cfi* directives. In RHEL5 only .eh_frame is emitted.
as, ld, ld.so, and gdb STB_GNU_UNIQUE and %gnu_unique_symbol support.
as, ld, gdb, objdump, and readelf debugging information — newer tools emit debugging information based on the latest DWARF standard, and also on new extensions not yet standardized. In RHEL5, tools like as, ld, gdb, objdump, and readelf may not be prepared for this new information and may fail to interoperate with objects created with the newer tools. In addition, RHEL5-produced object files do not support these new features; these object files may be handled by RHEL6 tools in a sub-optimal manner.
Changes to prelink.
compat-gcc-34
compat-gcc-34-c++
compat-gcc-34-g77
compat-libgfortran-41
RHEL5 also provides gcc44 as an update. This is a backport of the RHEL6 compiler to allow users running RHEL5 to compile their code with the RHEL6 compiler and experiment with new features and optimizations before upgrading their systems to the next major release. The resulting binary will be forward compatible with RHEL6, so one can compile on RHEL5 with gcc44 and run on RHEL5, RHEL6, and above.
The gcc44 compiler will be kept reasonably in step with the GCC 4.4.x that ships with RHEL6 to ease transition, though to get the latest features it is recommended RHEL6 is used for development. gcc44 is only provided as an aid in the conversion process.
To install GCC, install binutils and gcc; doing so will also install several dependencies.
The compiler is invoked with the gcc command. This is the main driver for the compiler. It can be used from the command line to pre-process or compile a source file, link object files and libraries, or perform a combination thereof. By default, gcc takes care of the details and links in the provided libgcc library.
It is also possible to compile in Eclipse using the CDT. This presents many advantages, particularly for developers who prefer a graphical interface and fully integrated environment. For more information about compiling in Eclipse, refer to Section 1.3, “Development Toolkits”.
#include <stdio.h>

int main ()
{
  printf ("Hello world!\n");
  return 0;
}
gcc hello.c -o hello
This produces the hello binary in the same directory as hello.c.
Run the resulting hello binary, i.e. ./hello.
#include <iostream>

using namespace std;

int main(void)
{
  cout << "Hello World!" << endl;
  return 0;
}
g++ hello.cc -o hello
This produces the hello binary in the same directory as hello.cc.
Run the resulting hello binary, i.e. ./hello.
#include <stdio.h>

void hello()
{
  printf("Hello world!\n");
}
extern void hello();

int main()
{
  hello();
  return 0;
}
gcc -c one.c -o one.o
This produces the object file one.o in the same directory as one.c.
gcc -c two.c -o two.o
This produces the object file two.o in the same directory as two.c.
Link one.o and two.o into a single executable with:
gcc one.o two.o -o hello
This produces the hello binary in the same directory as one.o and two.o.
Run the resulting hello binary, i.e. ./hello.
To optimize for a specific architecture, the -mtune= option to optimize the instruction scheduling and the -march= option to optimize the instruction selection should be used.
-mtune= optimizes instruction scheduling to fit your architecture by tuning everything except the ABI and the available instruction set. This option will not choose particular instructions, but instead will tune your program in such a way that executing on a particular architecture will be optimized. For example, if an Intel Core2 CPU will predominantly be used, choose -mtune=core2. If the wrong choice is made, the program will still run, but not optimally on the given architecture. The architecture on which the program will most likely run should always be chosen.
-march= optimizes instruction selection. As such, it is important to choose correctly, as choosing incorrectly will cause your program to fail. This option selects the instruction set used when generating code. For example, if the program will be run on an AMD K8 core based CPU, choose -march=k8. Specifying the architecture with this option will imply -mtune=.
The -mtune= and -march= options should only be used for tuning and selecting instructions within a given architecture, not to generate code for a different architecture (also known as cross-compiling). For example, this is not to be used to generate PowerPC code from an Intel 64 and AMD64 platform.
For more information on -march= and -mtune=, refer to the GCC documentation available here: GCC 4.4.4 Manual: Hardware Models and Configurations
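For example, to select the AMD K8 instruction set discussed above while compiling the earlier hello.c:

gcc -march=k8 hello.c -o hello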
-O2 is a good middle-of-the-road option to generate fast code. It produces the best optimized code when the resulting code size is not large. Use this when unsure what would best suit.
When code size is less of a concern, -O3 is preferable. This option produces code that is slightly larger but runs faster because of more frequent inlining of functions. This is ideal for floating point intensive code.
A third option is -Os. This flag optimizes for size, and produces faster code in situations where a smaller footprint will increase code locality, thereby reducing cache misses.
Use -frecord-gcc-switches when compiling objects. This records the options used to build objects into the objects themselves, so that after an object is built it is possible to determine which set of options were used to build it. The set of options is recorded in a section called .GCC.command.line within the object and can be examined with the following:
$ gcc -frecord-gcc-switches -O3 -Wall hello.c -o hello
$ readelf --string-dump=.GCC.command.line hello

String dump of section '.GCC.command.line':
  [     0]  hello.c
  [     8]  -mtune=generic
  [    17]  -O3
  [    1b]  -Wall
  [    21]  -frecord-gcc-switches
Step One — Compile the source code with -fprofile-generate.

Step Two — Run the resulting executable to gather profiling information.

Step Three — Recompile the code with -fprofile-use.
First, compile source.c to include profiling instrumentation:
gcc source.c -fprofile-generate -O2 -o executable
Run the executable to gather profiling information:
./executable
Then, recompile source.c using the profiling information gathered in the previous step:
gcc source.c -fprofile-use -O2 -o executable
The default profile directory can be changed with -fprofile-dir=DIR, where DIR is the preferred output directory.
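As a sketch, the full three-step cycle with a custom (illustrative) profile directory would be:

gcc source.c -fprofile-generate -fprofile-dir=/tmp/profile-data -O2 -o executable
./executable
gcc source.c -fprofile-use -fprofile-dir=/tmp/profile-data -O2 -o executable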
Developing 32-bit applications on a 64-bit host requires 32-bit versions of glibc and libgcc, and possibly of libstdc++ if the program is a C++ program. On Intel 64 and AMD64, this can be done with:
yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686
If the application also requires the db4-devel libraries to build, the 32-bit version of these libraries can be installed with:
Note that the .i686 suffix on the x86 platform (as opposed to x86-64) specifies a 32-bit version of the given package. For PowerPC architectures, the suffix is ppc (as opposed to ppc64).
With the 32-bit libraries installed, the -m32 option can be passed to the compiler and linker to produce 32-bit executables. Provided the supporting 32-bit libraries are installed on the 64-bit system, this executable will be able to run on both 32-bit systems and 64-bit systems.
For example, compile hello.c into a 64-bit executable with:
gcc hello.c -o hello64
$ file hello64
hello64: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
$ ldd hello64
	linux-vdso.so.1 =>  (0x00007fff242dd000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f0721514000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f0721893000)
Note that running file on a 64-bit executable will include ELF 64-bit in its output, and ldd will list /lib64/libc.so.6 as the main C library linked.
Then compile hello.c into a 32-bit executable with:
gcc -m32 hello.c -o hello32
$ file hello32
hello32: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
$ ldd hello32
	linux-gate.so.1 =>  (0x007eb000)
	libc.so.6 => /lib/libc.so.6 (0x00b13000)
	/lib/ld-linux.so.2 (0x00cd7000)
Note that running file on a 32-bit executable will include ELF 32-bit in its output, and ldd will list /lib/libc.so.6 as the main C library linked.
$ gcc -m32 hello32.c -o hello32
/usr/bin/ld: crt1.o: No such file: No such file or directory
collect2: ld returned 1 exit status
$ g++ -m32 hello32.cc -o hello32-c++
In file included from /usr/include/features.h:385,
                 from /usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../include/c++/4.4.4/x86_64-redhat-linux/32/bits/os_defines.h:39,
                 from /usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../include/c++/4.4.4/x86_64-redhat-linux/32/bits/c++config.h:243,
                 from /usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../include/c++/4.4.4/iostream:39,
                 from hello32.cc:1:
/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory
Note that -m32 will not adapt or convert a program to resolve any issues arising from 32/64-bit incompatibilities. For tips on writing portable code and converting from 32-bits to 64-bits, see the paper entitled Porting to 64-bit GNU/Linux Systems in the Proceedings of the 2003 GCC Developers Summit.
For more information, see the man pages for cpp, gcc, g++, gcj, and gfortran.
Distributed compiling is provided by the distcc package. It involves the following packages:
distcc
distcc-server
For more information, see the man pages for distcc and distccd. The following link also provides detailed information about the development of distcc:
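A typical usage sketch (the host names are hypothetical; DISTCC_HOSTS lists machines running the distccd server):

$ export DISTCC_HOSTS="localhost builder1 builder2"
$ make -j6 CC=distcc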
The most crucial function of Autotools is the creation of the configure script. This script runs prior to builds and creates the top-level Makefiles needed to build the application. The configure script may perform tests on the current system, create additional files, or run other directives as per parameters provided by the builder.

The Autotools suite's most commonly-used tools are:

autoconf — generates the configure script from an input file (e.g. configure.ac)

automake — generates the Makefile for a project on a specific system

autoscan — generates a preliminary input file (i.e. configure.scan), which can be edited to create a final configure.ac to be used by autoconf
All tools in the Autotools suite are part of the Development Tools group package. You can install this package group to install the entire Autotools suite, or simply use yum to install any tools in the suite as you wish.
Note that the Eclipse Autotools integration does not yet support the importing of projects from version control systems such as git or mercurial into Eclipse. As such, Autotools projects that use git repositories will need to be checked out outside the Eclipse workspace. Afterwards, you can specify the source location for such projects in Eclipse. Any repository manipulation (e.g. commits, updates) will need to be done via the command line.
Building with Autotools centers on the configure script. This script tests systems for tools, input files, and other features it can use in order to build the project [2]. The configure script generates a Makefile which allows the make tool to build the project based on the system configuration.
To create the configure script, first create an input file, then feed it to an Autotools utility in order to create the configure script. This input file is typically configure.ac or Makefile.am; the former is usually processed by autoconf, while the latter is fed to automake.
If a Makefile.am input file is available, the automake utility creates a Makefile template (i.e. Makefile.in), which may refer to information collected at configuration time. For example, the Makefile may need to link to a particular library if and only if that library is already installed. When the configure script runs, automake will use the Makefile.in templates to create a Makefile.
If a configure.ac file is available instead, then autoconf will automatically create the configure script based on the macros invoked by configure.ac. To create a preliminary configure.ac, use the autoscan utility and edit the file accordingly.
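A minimal sketch of that workflow, starting from a bare source tree:

$ autoscan                        # produces configure.scan
$ mv configure.scan configure.ac  # edit the macros as appropriate
$ autoconf                        # produces the configure script
$ ./configure
$ make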
For more information about Autotools, see the man pages for autoconf, automake, autoscan and most tools included in the Autotools suite. In addition, the Autotools community provides extensive documentation on autoconf and automake on the following websites:
The following is a simple example of using Autotools to manage the hello program:
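A minimal sketch of the two input files for such a hello program, assuming a single source file hello.c (all names are illustrative): a configure.ac of

AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

and a Makefile.am of

bin_PROGRAMS = hello
hello_SOURCES = hello.c

Running autoreconf --install followed by ./configure && make would then build the hello binary.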
The Specfile Editor plug-in for Eclipse helps developers manage .spec files. This plug-in allows users to leverage several Eclipse GUI features in editing .spec files, such as auto-completion, highlighting, file hyperlinks, and folding.
In addition, the Specfile Editor plug-in integrates the rpmlint tool into the Eclipse interface. rpmlint is a command-line tool that helps developers detect common RPM package errors. The richer visualization offered by the Eclipse interface helps developers quickly detect, view, and correct mistakes reported by rpmlint.
To install the Specfile Editor, install the eclipse-rpm-editor package. For more information about this plug-in, refer to the Specfile Editor User Guide in the Eclipse Help Contents.
Red Hat Enterprise Linux also provides -debuginfo packages for all architecture-dependent RPMs included in the operating system. A -debuginfo package contains accurate debugging information for its corresponding package. For Red Hat Enterprise Linux 6, the debuginfo packages are available on a new channel on the Red Hat Network. To install the -debuginfo package of a package (i.e. typically packagename-debuginfo), the machine first needs to be subscribed to the corresponding Debuginfo channel. For example, for Red Hat Enterprise Server 6, the corresponding channel would be Red Hat Enterprise Linux Server Debuginfo (v. 6). Then use the following command:
debuginfo-install packagename
Attempting to debug a package without the corresponding -debuginfo equivalent installed may fail, although GDB will try to provide any helpful diagnostics it can.
To prepare a program for debugging, compile it with the -g flag.
br (breakpoint)
The breakpoint command instructs GDB to halt execution upon reaching a specified point in the program.

r (run)
The run command starts the execution of the program. If run is executed with any arguments, those arguments are passed on to the executable as if the program had been started normally. Users normally issue this command after setting breakpoints.
p (print)
The print command displays the value of the argument given, and that argument can be almost anything relevant to the program. Usually, the argument is simply the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested.
bt (backtrace)
The backtrace command displays the chain of function calls used up until the execution was terminated. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes.
l (list)
The list command shows the line in the source code corresponding to where the program stopped.
c (continue)
The continue command simply restarts the execution of the program, which will continue to execute until it encounters a breakpoint, runs into a specified or emergent condition (e.g. an error), or terminates.
n (next)
Like continue, the next command also restarts execution; however, in addition to the stopping conditions implicit in the continue command, next will also halt execution at the next sequential line of code in the current source file.
s (step)
Like next, the step command also halts execution at each sequential line of code in the current source file. However, if execution is currently stopped at a source line containing a function call, GDB stops execution after entering the function call (rather than executing it).
fini (finish)
The finish command resumes execution, but halts when execution returns from a function.
q (quit)
The quit command ends the session.

h (help)
The help command provides access to GDB's extensive internal documentation. The command takes arguments: help breakpoint (or h br), for example, shows a detailed description of the breakpoint command. Refer to the help output of each command for more detailed information.
#include <stdio.h>

char hello[] = { "Hello, World!" };

int
main()
{
  fprintf (stdout, "%s\n", hello);
  return (0);
}
gcc -g -o hello hello.c
This produces the hello binary in the same directory as hello.c.
Run gdb on the hello binary, i.e. gdb hello.
After a few introductory remarks, gdb will display the default GDB prompt:
(gdb)
The variable hello is global, so it can be seen even before the main procedure starts:
(gdb) p hello
$1 = "Hello, World!"
(gdb) p hello[0]
$2 = 72 'H'
(gdb) p *hello
$3 = 72 'H'
(gdb)
Note that the print targets hello[0] and *hello require the evaluation of an expression, as does, for example, *(hello + 1):
(gdb) p *(hello + 1)
$4 = 101 'e'
(gdb) l
1	#include <stdio.h>
2
3	char hello[] = { "Hello, World!" };
4
5	int
6	main()
7	{
8	  fprintf (stdout, "%s\n", hello);
9	  return (0);
10	}
The list command reveals that the fprintf call is on line 8. Apply a breakpoint on that line and resume the code:
(gdb) br 8
Breakpoint 1 at 0x80483ed: file hello.c, line 8.
(gdb) r
Starting program: /home/moller/tinkering/gdb-manual/hello

Breakpoint 1, main () at hello.c:8
8	  fprintf (stdout, "%s\n", hello);
Finally, use the next command to step past the fprintf call, executing it:

(gdb) n
Hello, World!
9	  return (0);
Without conditional breakpoints, a user would have to issue the continue command thousands of times just to get to the iteration that crashed.
#include <stdio.h>

main()
{
  int i;

  for (i = 0;; i++) {
    fprintf (stdout, "i = %d\n", i);
  }
}
(gdb) br 8 if i == 8936
Breakpoint 1 at 0x80483f5: file iterations.c, line 8.
(gdb) r
i = 8931
i = 8932
i = 8933
i = 8934
i = 8935

Breakpoint 1, main () at iterations.c:8
8	  fprintf (stdout, "i = %d\n", i);
Use info br (info breakpoints) to review the breakpoint status:

(gdb) info br
Num     Type           Disp Enb Address    What
1       breakpoint     keep y   0x080483f5 in main at iterations.c:8
	stop only if i == 8936
	breakpoint already hit 1 time
The set follow-fork-mode feature is used to overcome this barrier, allowing programmers to follow a child process instead of the parent process.
set follow-fork-mode parent
The original process is debugged after a fork. The child process runs unimpeded. This is the default.

set follow-fork-mode child
The new process is debugged after a fork. The parent process runs unimpeded.

show follow-fork-mode
Display the current debugger response to a fork call.
Use the set detach-on-fork command to debug both the parent and the child processes after a fork, or retain debugger control over them both.
set detach-on-fork on
The child process (or parent process, depending on the value of follow-fork-mode) will be detached and allowed to run independently. This is the default.
set detach-on-fork off
Both processes will be held under the control of GDB. One process (child or parent, depending on the value of follow-fork-mode) is debugged as usual, while the other is suspended.
show detach-on-fork
Show whether detach-on-fork mode is on or off.
#include <unistd.h>

int main()
{
  pid_t  pid;
  const char *name;

  pid = fork();
  if (pid == 0)
    {
      name = "I am the child";
    }
  else
    {
      name = "I am the parent";
    }
  return 0;
}
This program, compiled with the command gcc -g fork.c -o fork -lpthread and examined under GDB, will show:
gdb ./fork
[...]
(gdb) break main
Breakpoint 1 at 0x4005dc: file fork.c, line 8.
(gdb) run
[...]
Breakpoint 1, main () at fork.c:8
8	  pid = fork();
(gdb) next
Detaching after fork from child process 3840.
9	  if (pid == 0)
(gdb) next
15	      name = "I am the parent";
(gdb) next
17	  return 0;
(gdb) print name
$1 = 0x400717 "I am the parent"
Instruct GDB to follow the child process instead with set follow-fork-mode child.
(gdb) set follow-fork-mode child
(gdb) break main
Breakpoint 1 at 0x4005dc: file fork.c, line 8.
(gdb) run
[...]
Breakpoint 1, main () at fork.c:8
8	  pid = fork();
(gdb) next
[New process 3875]
[Thread debugging using libthread_db enabled]
[Switching to Thread 0x7ffff7fd5720 (LWP 3875)]
9	  if (pid == 0)
(gdb) next
11	      name = "I am the child";
(gdb) next
17	  return 0;
(gdb) print name
$2 = 0x400708 "I am the child"
(gdb)
These settings can be made permanent in the GDB configuration file .gdbinit. For example, if set follow-fork-mode ask is added to ~/.gdbinit, then ask mode becomes the default mode.
Enable this behavior with set non-stop on and set target-async on. These can be added to .gdbinit. Once that functionality is turned on, GDB is ready to conduct thread debugging.
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_t thread;

void* thread3 (void* d)
{
  int count3 = 0;

  while(count3 < 1000){
    sleep(10);
    printf("Thread 3: %d\n", count3++);
  }
  return NULL;
}

void* thread2 (void* d)
{
  int count2 = 0;

  while(count2 < 1000){
    printf("Thread 2: %d\n", count2++);
  }
  return NULL;
}

int main (){

  pthread_create (&thread, NULL, thread2, NULL);
  pthread_create (&thread, NULL, thread3, NULL);

  //Thread 1
  int count1 = 0;

  while(count1 < 1000){
    printf("Thread 1: %d\n", count1++);
  }

  pthread_join(thread,NULL);
  return 0;
}
gcc -g three-threads.c -o three-threads -lpthread
gdb ./three-threads
(gdb) break thread3
Breakpoint 1 at 0x4006c0: file three-threads.c, line 9.
(gdb) break thread2
Breakpoint 2 at 0x40070c: file three-threads.c, line 20.
(gdb) break main
Breakpoint 3 at 0x40074a: file three-threads.c, line 30.
(gdb) run
[...]
Breakpoint 3, main () at three-threads.c:30
30	  pthread_create (&thread, NULL, thread2, NULL);
[...]
(gdb) info threads
* 1 Thread 0x7ffff7fd5720 (LWP 4620)  main () at three-threads.c:30
(gdb)
info threads provides a summary of the program's threads and some details about their current state. In this case there is only one thread that has been created so far.
(gdb) next
[New Thread 0x7ffff7fd3710 (LWP 4687)]
31          pthread_create (&thread, NULL, thread3, NULL);
(gdb)
Breakpoint 2, thread2 (d=0x0) at three-threads.c:20
20          int count2 = 0;
next
[New Thread 0x7ffff75d2710 (LWP 4688)]
34          int count1 = 0;
(gdb)
Breakpoint 1, thread3 (d=0x0) at three-threads.c:9
9           int count3 = 0;
info threads
  3 Thread 0x7ffff75d2710 (LWP 4688)  thread3 (d=0x0) at three-threads.c:9
  2 Thread 0x7ffff7fd3710 (LWP 4687)  thread2 (d=0x0) at three-threads.c:20
* 1 Thread 0x7ffff7fd5720 (LWP 4620)  main () at three-threads.c:34
Use the thread <thread number> command to switch the focus to another thread.
(gdb) thread 2
[Switching to thread 2 (Thread 0x7ffff7fd3710 (LWP 4687))]#0  thread2 (d=0x0) at three-threads.c:20
20          int count2 = 0;
(gdb) list
15          return NULL;
16      }
17
18      void* thread2 (void* d)
19      {
20          int count2 = 0;
21
22          while(count2 < 1000){
23              printf("Thread 2: %d\n", count2++);
24          }
(gdb) next
22          while(count2 < 1000){
(gdb) print count2
$1 = 0
(gdb) next
23              printf("Thread 2: %d\n", count2++);
(gdb) next
Thread 2: 0
22          while(count2 < 1000){
(gdb) next
23              printf("Thread 2: %d\n", count2++);
(gdb) print count2
$2 = 1
(gdb) info threads
  3 Thread 0x7ffff75d2710 (LWP 4688)  thread3 (d=0x0) at three-threads.c:9
* 2 Thread 0x7ffff7fd3710 (LWP 4687)  thread2 (d=0x0) at three-threads.c:23
  1 Thread 0x7ffff7fd5720 (LWP 4620)  main () at three-threads.c:34
(gdb)
(gdb) thread 3
[Switching to thread 3 (Thread 0x7ffff75d2710 (LWP 4688))]#0  thread3 (d=0x0) at three-threads.c:9
9           int count3 = 0;
(gdb) list
4
5       pthread_t thread;
6
7       void* thread3 (void* d)
8       {
9           int count3 = 0;
10
11          while(count3 < 1000){
12              sleep(10);
13              printf("Thread 3: %d\n", count3++);
(gdb)
To allow thread 3 to keep running while other commands are issued, resume it with continue:
(gdb) continue &
(gdb) Thread 3: 0
Thread 3: 1
Thread 3: 2
Thread 3: 3
Note the & at the end of the continue command. This allows the GDB prompt to return so other commands can be executed. Using the interrupt command, execution can be stopped should thread 3 become interesting again.
(gdb) interrupt
[Thread 0x7ffff75d2710 (LWP 4688)] #3 stopped.
0x000000343f4a6a6d in nanosleep () at ../sysdeps/unix/syscall-template.S:82
82      T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
(gdb) thread 1
[Switching to thread 1 (Thread 0x7ffff7fd5720 (LWP 4620))]#0  main () at three-threads.c:34
34          int count1 = 0;
(gdb) next
36          while(count1 < 1000){
(gdb) next
37              printf("Thread 1: %d\n", count1++);
(gdb) next
Thread 1: 0
36          while(count1 < 1000){
(gdb) next
37              printf("Thread 1: %d\n", count1++);
(gdb) next
Thread 1: 1
36          while(count1 < 1000){
(gdb) next
37              printf("Thread 1: %d\n", count1++);
(gdb) next
Thread 1: 2
36          while(count1 < 1000){
(gdb) print count1
$3 = 3
(gdb) info threads
  3 Thread 0x7ffff75d2710 (LWP 4688)  0x000000343f4a6a6d in nanosleep () at ../sysdeps/unix/syscall-template.S:82
  2 Thread 0x7ffff7fd3710 (LWP 4687)  thread2 (d=0x0) at three-threads.c:23
* 1 Thread 0x7ffff7fd5720 (LWP 4620)  main () at three-threads.c:36
(gdb)
VTA is enabled by default when compiling code with optimizations and debug information; to disable it, use the option -fno-var-tracking-assignments. In addition, the VTA infrastructure includes the new gcc option -fcompare-debug. This option tests code compiled by GCC with debug information and without debug information: the test passes if the two binaries are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that -fcompare-debug adds significant cost in compilation time. Refer to man gcc for details about this option.
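For example, to compile a single file with this self-check enabled (the file name is hypothetical):

gcc -c -g -fcompare-debug hello.c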
The GDB command print outputs comprehensive debugging information for a target application. GDB aims to provide as much debugging data as it can to users; however, for highly complex programs this data can become very cryptic. In addition, GDB does not provide an easy way for users to build tools that help decipher print output. This makes the practice of reading and understanding debugging data quite arcane, particularly for large, complex projects. One way to customize print output (and make it more meaningful) is to revise and recompile GDB. However, very few developers can actually do this, and the practice does not scale well if the developer also needs to debug other programs that are heterogeneous and contain equally complex debugging data. To address this, GDB supports pretty printers written in Python.
Consider the following example program, fruit.cc:

enum Fruits {Orange, Apple, Banana};

class Fruit
{
  int fruit;

  public:
  Fruit (int f)
  {
    fruit = f;
  }
};

int main()
{
  Fruit myFruit(Apple);
  return 0;   // line 17
}
Compile the program with g++ -g fruit.cc -o fruit, then examine it with GDB:
gdb ./fruit
[...]
(gdb) break 17
Breakpoint 1 at 0x40056d: file fruit.cc, line 17.
(gdb) run
Breakpoint 1, main () at fruit.cc:17
17          return 0;   // line 17
(gdb) print myFruit
$1 = {fruit = 1}
The output {fruit = 1} is correct, because that is the internal representation of 'fruit' in the data structure 'Fruit'. However, it is not easy for humans to read: it is difficult to tell which fruit the integer 1 represents.
fruit.py:

class FruitPrinter:
    def __init__(self, val):
        self.val = val

    def to_string (self):
        fruit = self.val['fruit']
        if (fruit == 0):
            name = "Orange"
        elif (fruit == 1):
            name = "Apple"
        elif (fruit == 2):
            name = "Banana"
        else:
            name = "unknown"
        return "Our fruit is " + name

def lookup_type (val):
    if str(val.type) == 'Fruit':
        return FruitPrinter(val)
    return None

gdb.pretty_printers.append (lookup_type)
The statement gdb.pretty_printers.append (lookup_type) adds the function lookup_type to GDB's list of printer lookup functions.
The lookup_type function is responsible for examining the type of object to be printed and returning an appropriate pretty printer. The object is passed by GDB in the parameter val; val.type is an attribute that represents the type of the object to be printed.
The FruitPrinter class is where the actual work is done, specifically in its to_string function. In this function, the integer fruit is retrieved using the Python dictionary syntax self.val['fruit'], and the name is determined from that value. The string returned by this function is the string that will be printed to the user.
After creating fruit.py, it must be loaded into GDB with the following command:
(gdb) python execfile("fruit.py")
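With the printer loaded, printing myFruit now returns the string produced by to_string (the output shown here is what the printer above would produce):

(gdb) print myFruit
$2 = Our fruit is Apple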
Red Hat Enterprise Linux 6 supports several tools (such as Valgrind, OProfile, perf, and SystemTap) to collect profiling data. Each tool is suitable for performing specific types of profile runs, as described in the following sections.
Memcheck detects memory management problems in programs by checking all reads from and writes to memory, and by intercepting all calls to malloc, new, free, and delete. Memcheck is perhaps the most used Valgrind tool, as memory management problems can be difficult to detect using other means. Such problems often remain undetected for long periods, eventually causing crashes that are difficult to diagnose.
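As a minimal sketch of the kind of bug Memcheck catches, consider this deliberately leaky program (the file name leak.c is hypothetical):

#include <stdlib.h>

int main(void)
{
    int *data = malloc(100 * sizeof(int));  /* allocate a block ... */
    data[0] = 42;                           /* ... use it ... */
    return 0;                               /* ... but never free it */
}

Compiled with gcc -g leak.c -o leak and run under valgrind --tool=memcheck --leak-check=full ./leak, Memcheck reports the 400 bytes as definitely lost, with a stack trace pointing at the malloc call.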
Like cachegrind, callgrind can model cache behavior. However, the main purpose of callgrind is to record call graph data for the executed code.
Valgrind also ships the lackey tool, a sample that can be used as a template for generating your own tools.
The valgrind package and its dependencies install all the necessary tools for performing a Valgrind profile run. To profile a program with Valgrind, use:
valgrind --tool=toolname program

Replace toolname with the name of the desired Valgrind tool. In addition to the suite of Valgrind tools, none is also a valid argument for toolname; this argument allows you to run a program under Valgrind without performing any profiling, which is useful for debugging or benchmarking Valgrind itself.
Valgrind can also redirect its messages to a specific file with the option --log-file=filename. For example, to check the memory usage of the executable file hello and send profile information to output, use:
valgrind --tool=memcheck --log-file=output hello
An Eclipse plug-in for Valgrind is available through the eclipse-valgrind package. For more information about this plug-in, refer to the Valgrind Integration User Guide in the Eclipse Help Contents.
For more information about Valgrind, refer to man valgrind. Red Hat Enterprise Linux 6 also provides a comprehensive Valgrind Documentation book, available as PDF and HTML in:
file:///usr/share/doc/valgrind-version/valgrind_manual.pdf
file:///usr/share/doc/valgrind-version/html/index.html
The Valgrind Integration User Guide is also available in the Eclipse Help Contents, provided by the eclipse-valgrind package.
OProfile takes a sample every x number of events (for example, cache misses or branch instructions). Each sample also contains information on where it occurred in the program.
Profiling consists of starting the OProfile daemon (oprofiled), running the program to be profiled, collecting the system profile data, and parsing it into a more understandable format. OProfile provides several tools for every step of this process.
The opreport command outputs binary image summaries, or per-symbol data, from OProfile profiling sessions.
The opannotate command outputs annotated source and/or assembly from the profile data of an OProfile session.
The oparchive command generates a directory populated with executable, debug, and OProfile sample files. This directory can be moved to another machine (via tar), where it can be analyzed offline.
Like opreport, the opgprof command outputs profile data for a given binary image from an OProfile session. The output of opgprof is in gprof format.
For a general overview, refer to man oprofile. For detailed information on each OProfile command, refer to its corresponding man page. Refer to Section 7.3.4, “OProfile Documentation” for other available documentation on OProfile.
The oprofile package and its dependencies install all the necessary utilities for performing an OProfile profile run. To instruct OProfile to profile all the applications running on the system, and to group the samples for shared libraries with the applications using them, run the following command as root:
opcontrol --no-vmlinux --separate=library --start
To start the OProfile daemon without collecting data, use the option --start-daemon instead. The --stop option halts data collection, while the --shutdown option terminates the OProfile daemon.
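A typical session (the program name is hypothetical) might therefore look like:

opcontrol --no-vmlinux --separate=library --start
./myprog
opcontrol --stop
opcontrol --shutdown
opreport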
Use opreport, opannotate, or opgprof to display the collected profiling data. By default, the data collected by the OProfile daemon is stored in /var/lib/oprofile/samples/.
An Eclipse plug-in for OProfile is available through the eclipse-oprofile package. For more information about this plug-in, refer to the OProfile Integration User Guide in the Eclipse Help Contents (also provided by eclipse-oprofile).
For more information about OProfile, refer to man oprofile. Red Hat Enterprise Linux 6 also provides two comprehensive guides to OProfile in file:///usr/share/doc/oprofile-version/:
file:///usr/share/doc/oprofile-version/oprofile.html
file:///usr/share/doc/oprofile-version/internals.html
The OProfile Integration User Guide is also available in the Eclipse Help Contents, provided by the eclipse-oprofile package.
To use SystemTap against the kernel, the following kernel information packages must be installed:

kernel-variant-devel-version
kernel-variant-debuginfo-version
kernel-variant-debuginfo-common-arch-version

Note that the kernel-variant-debuginfo-common-arch-version package was originally named kernel-variant-debuginfo-common-version in Red Hat Enterprise Linux 5.
SystemTap also provides the --remote option. This allows users to build a SystemTap module locally and then execute it remotely via SSH. The syntax is --remote [USER@]HOSTNAME; this sets the execution target to the specified SSH host, optionally using a different username. The option may be repeated to target multiple execution targets. Passes 1-4 are completed locally as normal to build the script, and then pass 5 copies the module to the target and runs it.
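For example (the host name is hypothetical), the following builds a trivial script locally and runs it on a remote target:

stap --remote root@server.example.com -e 'probe begin { printf("hello from the target\n"); exit() }'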
SystemTap users normally require root (or sudo) access to their own machines. In addition, full SystemTap functionality should be restricted to privileged users, as it can provide the ability to completely take control of a system.
To relax this, SystemTap provides the --unprivileged option, which allows an unprivileged user to run stap. Of course, several security restrictions apply to unprivileged users who attempt to run stap.
In this context, an unprivileged user is one who is a member of the group stapusr but not a member of the group stapdev (and is not root).
When the --unprivileged option is used, the compile server checks the script against the constraints imposed on unprivileged users. If the checks are successful, the server compiles the script and signs the resulting module using a self-generated certificate. When the client attempts to load the module, staprun first verifies the signature of the module by checking it against a database of trusted signing certificates maintained and authorized by root.
Once a signed module is successfully verified, staprun is assured that:

the module was created by a trusted compile server using the --unprivileged option.

the module satisfies the restrictions imposed by the --unprivileged option.
Compile servers started through the stap-server initscript are automatically authorized to receive connections from all clients on the same host. A compile server started through the stap-server initscript is also automatically authorized as a trusted signer on the host on which it runs. If the compile server was initiated through other means, it is not automatically authorized as such.
For more information about SystemTap, refer to the following documentation installed with the systemtap package:
file:///usr/share/doc/systemtap-version/SystemTap_Beginners_Guide/index.html
file:///usr/share/doc/systemtap-version/SystemTap_Beginners_Guide.pdf
file:///usr/share/doc/systemtap-version/tapsets/index.html
file:///usr/share/doc/systemtap-version/tapsets.pdf
Use perf to analyze the collected performance data. Commonly used perf commands include the following:
perf stat: this perf command provides overall statistics for common performance events, including instructions executed and clock cycles consumed. Options allow selection of events other than the default measurement events.

perf record: this perf command records performance data into a file which can be later analyzed using perf report.

perf report: this perf command reads the performance data from a file and analyzes the recorded data.

perf list: this perf command lists the events available on a particular machine. These events will vary based on the performance monitoring hardware and the software configuration of the system.
Use perf help to obtain a complete list of perf commands. To retrieve man page information on each perf command, use perf help command.
For example, to collect statistics on make and its children, use the following command:
perf stat -- make all
The perf command collects a number of different hardware and software counters, and then prints information like the following:
Performance counter stats for 'make all':

  244011.782059  task-clock-msecs   #      0.925 CPUs
          53328  context-switches   #      0.000 M/sec
            515  CPU-migrations     #      0.000 M/sec
        1843121  page-faults        #      0.008 M/sec
   789702529782  cycles             #   3236.330 M/sec
  1050912611378  instructions       #      1.331 IPC
   275538938708  branches           #   1129.203 M/sec
     2888756216  branch-misses      #      1.048 %
     4343060367  cache-references   #     17.799 M/sec
      428257037  cache-misses       #      1.755 M/sec

  263.779192511  seconds time elapsed
The perf tool can also record samples. For example, to record data on the make command and its children, use:
perf record -- make all
[ perf record: Woken up 42 times to write data ]
[ perf record: Captured and wrote 9.753 MB perf.data (~426109 samples) ]
The sampled data is stored in a file named perf.data and can be analyzed to determine the relative frequency of samples. The report output includes the command, object, and function for the samples. Use perf report to output an analysis of perf.data. For example, the following command produces a report of the executable that consumes the most time:
perf report --sort=comm
# Samples: 1083783860000
#
# Overhead          Command
# ........  ...............
#
    48.19%         xsltproc
    44.48%        pdfxmltex
     6.01%             make
     0.95%             perl
     0.17%       kernel-doc
     0.05%          xmllint
     0.05%              cc1
     0.03%               cp
     0.01%            xmlto
     0.01%               sh
     0.01%          docproc
     0.01%               ld
     0.01%              gcc
     0.00%               rm
     0.00%              sed
     0.00%   git-diff-files
     0.00%             bash
     0.00%   git-diff-index
The report shows that make spends most of its time in xsltproc and pdfxmltex. To reduce the time for make to complete, focus on xsltproc and pdfxmltex. To list the functions executed by xsltproc, run:
perf report -n --comm=xsltproc
# comm: xsltproc
# Samples: 472520675377
#
# Overhead       Samples                 Shared Object  Symbol
# ........  ............  ............................  ......
#
    45.54%  215179861044  libxml2.so.2.7.6              [.] xmlXPathCmpNodesExt
    11.63%   54959620202  libxml2.so.2.7.6              [.] xmlXPathNodeSetAdd__internal_alias
     8.60%   40634845107  libxml2.so.2.7.6              [.] xmlXPathCompOpEval
     4.63%   21864091080  libxml2.so.2.7.6              [.] xmlXPathReleaseObject
     2.73%   12919672281  libxml2.so.2.7.6              [.] xmlXPathNodeSetSort__internal_alias
     2.60%   12271959697  libxml2.so.2.7.6              [.] valuePop
     2.41%   11379910918  libxml2.so.2.7.6              [.] xmlXPathIsNaN__internal_alias
     2.19%   10340901937  libxml2.so.2.7.6              [.] valuePush__internal_alias
The ftrace framework provides users with several tracing capabilities, accessible through an interface much simpler than SystemTap's. This framework uses a set of virtual files in the debugfs file system; these files enable specific tracers. The ftrace function tracer simply outputs each function called in the kernel in real time; other tracers within the ftrace framework can also be used to analyze wakeup latency, task switches, kernel events, and the like.
New tracers can be added to ftrace, making it a flexible solution for analyzing kernel events. The ftrace framework is useful for debugging or analyzing latencies and performance issues that take place outside of user-space. Unlike the other profilers documented in this guide, ftrace is a built-in feature of the kernel.
The kernel must be configured with the CONFIG_FTRACE=y option, which provides the interfaces needed by ftrace. To use ftrace, mount the debugfs file system as follows:
mount -t debugfs nodev /sys/kernel/debug
The ftrace utilities are located in /sys/kernel/debug/tracing/. View the /sys/kernel/debug/tracing/available_tracers file to find out which tracers are available for your kernel:
cat /sys/kernel/debug/tracing/available_tracers
power wakeup irqsoff function sysprof sched_switch initcall nop
To use a specific tracer, write its name into /sys/kernel/debug/tracing/current_tracer. For example, the wakeup tracer traces and records the maximum time it takes for the highest-priority task to be scheduled after the task wakes up. To use it:
echo wakeup > /sys/kernel/debug/tracing/current_tracer
To start or stop tracing, write to /sys/kernel/debug/tracing/tracing_on, as in:
echo 1 > /sys/kernel/debug/tracing/tracing_on
(enables tracing)
echo 0 > /sys/kernel/debug/tracing/tracing_on
(disables tracing)
The /sys/kernel/debug/tracing/trace_pipe file contains the same output as /sys/kernel/debug/tracing/trace, but is meant to be piped into a command. Unlike /sys/kernel/debug/tracing/trace, reading from this file consumes its output.
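For example, to watch the first lines of the live trace stream as they arrive:

cat /sys/kernel/debug/tracing/trace_pipe | head -n 20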
Publican's actions are documented in its --help and --man pages. The most common ones are:

build: builds a document in the requested formats and languages.

create: creates a new document.

create_brand: creates a new brand.

package: packages a document as an RPM.
Use the publican create command to create a new document, including all the required files. A number of parameters can be passed to publican create. These are:
--help: prints a list of accepted parameters for the publican create command.
--name Doc_Name: sets the name of the document.

--lang Language_Code: sets the language of the document. The --lang option sets the xml_lang in the publican.cfg file and creates a directory with this name in the document directory.
--version version: sets the version number of the product.

--product Product_Name: sets the name of the product.

--brand brand: sets the brand the document uses.

Refer to --help for more options.
It is wise to create a new directory and run publican create from within it, lest the files and directories be added to the user's home directory.
publican.cfg: this file configures the build options for the document. Its main parameters are xml_lang (the language the book is in, en-US for example), type (the type of document, a book or a set, for example), and brand (the branding the document uses, Red Hat for example; see Section 8.1.6, “Brands”). There are a number of optional parameters, but these should be used cautiously, as they can cause problems further on in areas like translation. A full list of these advanced parameters can be found in the Publican User Guide. The publican.cfg file is unlikely to be edited much beyond the initial creation.
book_info.xml: this file contains the document's metadata, such as its title, subtitle, and edition.

Author_Group.xml: this file stores the details of the document's authors.

Chapter.xml: this file is an example chapter; unless it is linked in Doc_Name.xml (below), it will not appear in the actual book. When writing content for the publication, new XML files are created, named appropriately (ch-publican.xml, for example), and linked in Doc_Name.xml. When the book is built, the content of these files will form the content of the book. The example file itself is unlikely to ever be edited, but files like it will be edited constantly as content is changed, updated, added to, or removed.
Doc_Name.xml: this is the parent XML file for the document's content, into which the chapter files are linked. In practice the file is named after the document title (Developer_Guide.xml, for example). Its base name must remain the same as that of Doc_Name.ent, or the book will not build.
Doc_Name.ent: this file defines the document's local entities. YEAR is set to the current year, and HOLDER has a reminder to place the copyright owner's name there. As with Doc_Name.xml, this file will not literally be called 'Doc_Name'; the title of the document is used instead (Developer_Guide.ent, for example). It is only likely to be edited once, at the beginning of publication, or if the copyright owner changes. Its base name must remain the same as that of Doc_Name.xml, or the book will not build.
Revision_History.xml: when publican package is run, the first XML file containing a <revhistory> tag is used to build the RPM revision history.
An images folder is created by publican in the document's directory. Store any images used in the document here; when entering an image into the document, link to the image inside the images directory (./images/image1.png, for example).
It is recommended to store code samples in a folder called extras in the document's directory. Then, when entering a code sample into the document, link to the file and the folder it is in. This way, an example used in several places can be updated only once, and rather than searching through a document looking for a specific item to change, all the code examples are located in one place, saving time and effort. Storing arbitrary files in a folder called files in the publication's directory will allow them to be added to the RPM when the book is compiled.
<xi:include parse="text" href="extras/fork/fork1.c" xmlns:xi="http://www.w3.org/2001/XInclude" />
To build a document, use publican build --formats=chosen_format --langs=chosen_language. For example, to build a document in US English as a single HTML page, run publican build --formats=html-single --langs=en-US. Provided there are no errors, the book will be built into the root directory, where the pages can be viewed to see whether it has the required look. It is recommended to do this regularly in order to make troubleshooting as easy as possible.
When drafting, it can be faster to build with the --novalid option. This skips over any cross-references and links that point to files or sections of the document that do not yet exist; instead, they are shown as three question marks (???).
To package a document, run publican package --lang=chosen_language. This outputs SRPM packages to tmp/rpm in the document's directory, and binary RPM packages to tmp/rpm/noarch in the document's directory. By default, these packages are named productname-title-productnumber-[web]-language-edition-pubsnumber.[build_target].noarch.file_extension, with the information for each of these sections coming from publican.cfg.
To create a new brand, use publican create_brand --name=brand --lang=language_code. This creates a folder called publican-brand and places it in the publication's directory. This folder contains the following files:
COPYING
defaults.cfg: this file provides default values for the parameters in a publication's publican.cfg file. Specifications from this file are applied before those in the publican.cfg file; therefore, values in the publican.cfg file override those in the defaults.cfg file. It is best used for aspects that are routinely used throughout the documents but that writers are still allowed to change.
overrides.cfg: this file provides values that override those in a publication's publican.cfg file. Specifications from this file are applied last, thus overriding both the defaults.cfg and the publican.cfg files. It is best used for aspects the writers are not allowed to change.
publican.cfg: this file is similar to the publican.cfg file for a publication, in that it configures basic information for the brand, such as version, release number, and brand name.
publican-brand.spec: this file is used by the RPM package manager when the brand is packaged.

README: this file contains a brief description of the brand package.

Feedback.xml: this file contains boilerplate text inviting readers to supply feedback.

Legal_Notice.xml: this file contains the legal notice for documents that use the brand.
The images subdirectory contains a number of images in both raster (PNG) and vector (SVG) formats; these serve as placeholders for various navigation icons and can be changed simply by replacing the images. The css folder contains overrides.css, which sets the visual style for the brand, overriding the style in common.css.
To package a brand, use the publican package command. By default this creates source RPM packages (SRPM packages), but it can also create binary RPM packages using the option --binary. Packages are named publican-brand-version-release.[build_target].[noarch].file_extension, with the required parameters taken from the publican.cfg file.
Binary RPM packages include [build_target].noarch before the file extension, where [build_target] represents the operating system and version that the package is built for, as set by the os_ver parameter in the publican.cfg file. The noarch element specifies that the package can be installed on any system, regardless of the system architecture.
For more information, refer to the --man, --help, and --help_actions pages accessed from the terminal. Detailed documentation is also available in the Publican User Guide, which can be installed with yum install publican-doc.
To generate a template configuration file, run doxygen -g config-file. The resulting file can be easily edited; the variable config-file is the name of the configuration file, and if it is omitted from the command, the file is simply called Doxyfile by default. Another useful option when creating the configuration file is a minus sign (-) as the file name; this is useful for scripting, as it causes Doxygen to attempt to read the configuration file from standard input (stdin).
The configuration file consists of a number of tags, in the format:

TAGNAME = VALUE1 VALUE2...

The file can also be edited with a GUI, doxywizard. If this is the preferred method of editing, documentation for it can be found on the Doxywizard usage page of the Doxygen documentation website.
INPUT: this tag sets the files and directories to be parsed; for larger projects, assign the project's root directory or directories here.
FILE_PATTERNS: file patterns (*.cpp or *.h, for example) can be added to this tag, allowing only files that match one of the patterns to be parsed.
RECURSIVE: setting this to yes will allow recursive parsing of a source tree.
EXCLUDE and EXCLUDE_PATTERNS: these are used to further fine-tune which files are parsed, by specifying files or patterns to avoid. For example, to leave out all test directories from a source tree, use EXCLUDE_PATTERNS = */test/*.
EXTRACT_ALL: when set to yes, doxygen will pretend that everything in the source files is documented, to give an idea of how a fully documented project would look. However, warnings regarding undocumented members will not be generated in this mode; set it back to no when finished, to correct this.
SOURCE_BROWSER and INLINE_SOURCES: by setting the SOURCE_BROWSER tag to yes, doxygen will generate a cross-reference linking a piece of software's definition in the source files with the documentation that exists about it. These sources can also be included in the documentation by setting INLINE_SOURCES to yes.
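Putting several of these tags together, a minimal configuration sketch (the paths and patterns are illustrative) might read:

INPUT            = src/
FILE_PATTERNS    = *.cpp *.h
RECURSIVE        = yes
EXCLUDE_PATTERNS = */test/*
EXTRACT_ALL      = no
SOURCE_BROWSER   = yes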
Running doxygen config-file creates html, rtf, latex, xml, and/or man directories in whichever directory doxygen is started in, containing the documentation for the corresponding file types.
HTML OUTPUT: this documentation can be viewed with a web browser by opening index.html in the html directory.
LaTeX OUTPUT: doxygen writes a Makefile into the latex directory in order to make it easy to compile the LaTeX documentation; doing so requires a recent teTeX distribution. What is contained in this directory depends on whether USE_PDFLATEX is set to no. Where this is true, typing make while in the latex directory generates refman.dvi. This can then be viewed with xdvi or converted to refman.ps by typing make ps (note that this requires dvips).
A number of related targets are available: make ps_2on1 prints two pages on one physical page, and if a ghostscript interpreter is installed, the documentation can also be converted to PDF with make pdf or make pdf_2on1. When doing this, set the PDF_HYPERLINKS and USE_PDFLATEX tags to yes; the generated Makefile will then contain only a target to build refman.pdf directly.
RTF OUTPUT: this documentation is written to a single file, refman.rtf. Some information is encoded using fields, which can be shown by selecting all (CTRL+A or Edit -> Select All), then right-clicking and selecting the toggle fields option from the drop-down menu.
XML OUTPUT: the xml directory consists of a number of files, one for each compound gathered by doxygen, as well as an index.xml. An XSLT script, combine.xslt, is also created; it can be used to combine all the XML files into a single file. Along with this, two XML schema files are created: index.xsd for the index file, and compound.xsd for the compound files, which describe the possible elements, their attributes, and how they are structured.
MAN PAGE OUTPUT: the documentation in the man directory can be viewed with the man program, after ensuring that this man directory has been added to the man path (see manpath). Be aware that, due to limitations of the man page format, information such as diagrams, cross-references, and formulas will be lost.
To document the code itself, first make sure EXTRACT_ALL is set to no, so warnings are correctly generated and documentation is built properly. In this mode, doxygen creates documentation only for documented members, files, classes, and namespaces.
There are several ways to mark a comment block as a detailed description:

/**
 * ... documentation ...
 */

/*!
 * ... documentation ...
 */

///
/// ... documentation ...
///

//!
//! ... documentation ...
//!

To make the comment blocks more visible, a line of intermediate characters can be used:

/////////////////////////////////////////////////
/// ... documentation ...
/////////////////////////////////////////////////

/********************************************//**
 * ... documentation ...
 ***********************************************/
To add a brief description, use \brief at the top of one of the comment blocks. This brief section ends at the end of the paragraph; any further paragraphs form the detailed description.
/*! \brief brief documentation.
 *         brief documentation.
 *
 *  detailed documentation.
 */
By setting JAVADOC_AUTOBRIEF to yes, the brief description lasts only until the first dot followed by a space or a new line, consequently limiting the brief description to a single sentence:
/** Brief documentation. Detailed documentation continues
 *  from here.
 */
/// Brief documentation.
/** Detailed documentation. */
//! Brief documentation.

//! Detailed documentation
//! starts here.
For this last style to work, the blank line is required and JAVADOC_AUTOBRIEF needs to be set to no.
Members of a file, struct, union, class, or enum can also be documented after the member, by adding a < marker to the comment:

int var; /*!< detailed description after the member */

int var; /**< detailed description after the member */

int var; //!< detailed description after the member
         //!<

int var; ///< detailed description after the member
         ///<

For brief descriptions after a member, use:

int var; //!< brief description after the member

int var; ///< brief description after the member
/*! \class Test
    \brief A test class.

    A more detailed description of the class.
 */
Here the special command \class is used. This indicates that the comment block contains documentation for the class Test. Other structural commands are:
\struct: document a C-struct
\union: document a union
\enum: document an enumeration type
\fn: document a function
\var: document a variable, typedef, or enum value
\def: document a #define
\typedef: document a type definition
\file: document a file
\namespace: document a namespace
\package: document a Java package
\interface: document an IDL interface
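As an illustration (the function itself is hypothetical), \fn documents a function together with its parameters and return value:

/*! \fn int add(int a, int b)
    \brief Adds two integers.
    \param a The first operand.
    \param b The second operand.
    \return The sum of a and b.
 */
int add(int a, int b)
{
    return a + b;
}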
Revision History

Revision | Date
---|---
1-15 | Fri Dec 02 2011
1-8 | Mon Nov 14 2011
1-6 | Wed Nov 09 2011
1-4 | Wed Nov 02 2011
1-1 | Wed Oct 26 2011
1-1 | Tue Sep 27 2011
0-86 | Fri Sep 02 2011
0-83 | Tue Aug 30 2011
0-82 | Mon Aug 15 2011
0-81 | Thu Aug 04 2011
0-80 | Fri Jul 29 2011
0-76 | Mon Jun 20 2011
0-72 | Mon May 30 2011
0-71 | Tue May 24 2011
0-69 | Thu May 19 2011
0-68 | Thu May 19 2011
0-67 | Wed May 18 2011
0-66 | Tue May 17 2011
0-65 | Tue May 17 2011
0-60 | Mon May 16 2011
0-55 | Mon May 09 2011
0-50 | Wed Apr 27 2011
0-45 | Mon Mar 28 2011
0-41 | Fri Feb 04 2011
0-40 | Tue Jan 25 2011
0-39 | Tue Dec 21 2010
0-38 | Tue Dec 14 2010
0-37 | Tue Dec 07 2010
0-36 | Thu Dec 02 2010
0-35 | Thu Dec 02 2010
0-34 | Wed Dec 01 2010
0-33 | Mon Nov 29 2010
0-32 | Mon Nov 15 2010
0-31 | Mon Nov 14 2010