  1. #1
    Just Joined! | Join Date: Sep 2010 | Posts: 13

    Question: link directories, symlink not updated, hard link not supported


    Hi Forum!

    In the ordering of files I keep I need links to directories. Sometimes I even need to move directories to new locations.

    I have tried using symlinks, but they become dead when I move the directory they point to.
    I have tried hard links, but I haven't found any Linux file system that would support hard linked directories.

    How can I make a complex structure of directories (currently with symlinks for directories and hard links for files) keep its symlinks live when directories are moved?
    - Is there any utility that updates symlinks when a directory is moved?
    - Is there any Linux filesystem that supports hard-linked directories?
    - Is there any good Linux interface to NTFS (the only file system I know of that supports automatically updated directory links, called directory junctions)?
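    On the first bullet: no standard tool does this, but a small move-and-repoint helper can be sketched in shell. Everything below is hypothetical (the function name `mvln` and the layout are made up); it assumes `find`, `readlink`, and `ln -sfn` as found on Linux.

```shell
# mvln OLD NEW ROOT: move directory OLD to NEW, then repoint every
# symlink under ROOT whose target was OLD (or below it).
# Hypothetical helper, not a standard utility.
mvln() {
    old=$1
    new=$2
    root=$3
    mv -- "$old" "$new"
    find "$root" -type l | while IFS= read -r link; do
        target=$(readlink "$link")
        case $target in
            "$old"|"$old"/*)
                # Keep any suffix below the moved directory.
                ln -sfn "$new${target#"$old"}" "$link" ;;
        esac
    done
}
```

    Called as `mvln /data/projects /data/archive /home/me`, one pass over the search root fixes every affected link; links outside the root still go stale, which is the fundamental weakness of symlinks here.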

    Thanks

    -- hardlink

  2. #2
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,677
    Basically, you are SOL, though there are ways to deal with the symlink problem. For that, you need two levels of symbolic links. One is a global location that you manually update when a directory is moved; the other is that all other links to the directory point at the global location. That way, only one link needs to be updated, versus possibly hundreds of instances. Call that global location a "shadow" directory tree. You will only need to update the shadow links when moving stuff.
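    The two-level scheme can be sketched in a few commands; the paths below are hypothetical, using a scratch tree under /tmp:

```shell
# Shadow tree demo: exactly one canonical symlink per real directory.
rm -rf /tmp/shadow-demo            # start clean (demo only)
mkdir -p /tmp/shadow-demo/real/projects /tmp/shadow-demo/shadow
cd /tmp/shadow-demo

# The single shadow link, updated by hand whenever the target moves:
ln -s /tmp/shadow-demo/real/projects shadow/projects

# Every other reference points at the shadow entry, never the real path:
ln -s /tmp/shadow-demo/shadow/projects work

# Moving the real directory now means fixing exactly one link:
mv real/projects real/archive
ln -sfn /tmp/shadow-demo/real/archive shadow/projects
```

    After the move, `work` still resolves, because the shadow entry was the single point of update.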
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  3. #3
    Just Joined! | Join Date: Sep 2010 | Posts: 13
    Thanks Rubberman, this is a workaround, indeed. I'm thinking about a variant where I have a flat listing of directories and all my subdirectories are symlinks to one element of the flat listing.
    However, while I can easily list hard-linked regular files and determine where the various copies of a hard-linked file are, I don't know how to list the symlinks pointing to the same directory. Any idea for this?

    Also, if there were another solution that did not need an additional symlink for each directory, I would prefer that.
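    On the listing question: hard links can be enumerated by inode, while symlinks can only be found by scanning a tree and matching their target strings. A sketch using GNU find's `-samefile` and `-lname` tests (the layout is hypothetical):

```shell
# Scratch demo of both lookups.
cd "$(mktemp -d)"
mkdir -p data/projects
touch data/notes.txt
ln data/notes.txt data/notes-copy.txt     # hard link: same inode, two names
ln -s "$PWD/data/projects" projects-link  # symlink into the tree

# Hard-linked regular files can be enumerated by inode:
find "$PWD" -xdev -samefile data/notes.txt

# A symlink stores only a target string, so listing every symlink that
# points at a directory means scanning a tree and matching targets:
find "$PWD" -xdev -type l -lname "$PWD/data/projects"
```

    `-xdev` keeps the scan on one filesystem; inode numbers are only unique per filesystem anyway.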

  5. #4
    Linux Engineer Kloschüssel's Avatar
    Join Date
    Oct 2005
    Location
    Italy
    Posts
    773
    Quote Originally Posted by hardlink View Post
    I'm thinking about a variant where I have a flat listing of directories and all my subdirectories are symlinks to one element of the flat listing.
    This is a stupid idea, because performance REALLY goes down if you have too many directories/files in the same directory, and past a certain count of child entries you really run into problems because the underlying filesystem has limits. Refer to the theory behind B*-tree data structures, or graph theory in general, to understand why.

    Anyway, you could have a kind of storage directory that contains only files named by a hash of their content, sub-treed by parts of that hash. I already set up a multimedia system with a large database behind it where you could tag elements and such, but I was REALLY worried about what would happen if the database got screwed up. In the end there is only one type of application that needs such a thing, and that is the detection of duplicate files. But even that causes problems, because with any hash you can run into collisions that map different files to the same name, and in the worst case you would lose data.

    For example MD5 (128-bit digests):

    Assuming MD5 distributes its keys evenly, the birthday problem says an accidental collision only becomes likely once you have hashed on the order of 2^64 (about 1.8 x 10^19) files; with 10,000 files the probability of any collision is roughly 10^-31. The real reason not to rely on MD5 for file identity is that it is cryptographically broken, so colliding files can be constructed deliberately.
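    As a sanity check, the birthday bound p ≈ n² / (2d), with d = 2^128 possible digests, can be evaluated directly (assuming an ideally uniform hash):

```shell
# Birthday-bound collision estimate for n files under an ideal 128-bit hash.
awk 'BEGIN {
    n = 10000              # number of hashed files
    d = 2 ^ 128            # number of possible 128-bit digests
    printf "%.3g\n", n * n / (2 * d)
}'
```

    This prints about 1.47e-31, i.e. an accidental collision among 10,000 files is astronomically unlikely.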

    Quote Originally Posted by hardlink View Post
    I don't know how to list symlinks pointing to the same directory. Any idea for this?
    You can't. A symlink is a one-way connection. Finding them all would require searching the whole directory tree starting from root (imagine that on a rather large NAS with thousands of files in thousands of directories).
    Last edited by Kloschüssel; 09-07-2010 at 08:25 AM.

  6. #5
    Just Joined! | Join Date: Sep 2010 | Posts: 13
    Quote Originally Posted by Kloschüssel View Post
    performance REALLY goes down if you have too many directories/files on the same inode
    Thanks for pointing this out. I can use subdirectories to store the directories I symlink to.
    Constructing the subdirectories in my case does not need hashes, and there is no risk of collision, because I can use creation time (plus a suffix that distinguishes between the very few entries with the same creation time) as a unique name.
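    That timestamp-plus-suffix scheme might look like this (the `store/` directory is hypothetical; a numeric suffix breaks ties within the same second):

```shell
cd "$(mktemp -d)"
mkdir store

# Allocate a unique directory name from the current time,
# appending -1, -2, ... if that second is already taken.
stamp=$(date +%Y%m%dT%H%M%S)
name=$stamp
n=0
while [ -e "store/$name" ]; do
    n=$((n + 1))
    name=$stamp-$n
done
mkdir "store/$name"
```

    The loop makes the name unique even when several directories are created within one second.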

  7. #6
    Just Joined! | Join Date: Sep 2010 | Posts: 13
    Quote Originally Posted by hardlink View Post
    how to list symlinks pointing to the same directory
    Quote Originally Posted by Kloschüssel View Post
    You can't. A symlink is a one way connection. It would require you to search the whole directory tree starting from root (imagine that on a rather large NAS with thousands of files in thousands of inodes).
    I will need this, though. But once in a while I could survive searching the whole tree.
    Maybe I could use pairs of symlinks: one pointing from child to parent, the other from parent to child.
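    The paired-links idea could be sketched like this (layout hypothetical; a hidden `.backlink` records which symlink references the directory, so a move knows exactly what to repair):

```shell
cd "$(mktemp -d)"
mkdir -p real/projects views

ln -s "$PWD/real/projects" views/projects            # parent -> child
ln -s "$PWD/views/projects" real/projects/.backlink  # child -> the link above

# After a move, the back-pointer names exactly the symlink to repair:
mv real/projects real/archive
ln -sfn "$PWD/real/archive" "$(readlink real/archive/.backlink)"
```

    This only tracks a single referencing link per directory; supporting several would need a directory of back-pointers.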

  8. #7
    Linux Engineer Kloschüssel's Avatar
    Join Date
    Oct 2005
    Location
    Italy
    Posts
    773
    That would be a working workaround. But I would rather build up a database that stores the file-link information.

  9. #8
    Just Joined! | Join Date: Sep 2010 | Posts: 13
    Quote Originally Posted by Kloschüssel View Post
    I would rather build up a database that stores the file-link information.
    My purpose is to store files of various types in subdirectories, and file systems are databases for exactly this purpose. Programming such a database and generic access to the files would be comparable to programming a new file system.
    This leads me back to one of my original questions: is there a file system that already supports links to directories that do not break when the target directory is moved?

  10. #9
    Linux Engineer Kloschüssel | Join Date: Oct 2005 | Location: Italy | Posts: 773
    There are hard links exactly for that purpose.

    Code:
    man cp
      -l, --link
            link files instead of copying
    In case it didn't occur to you to use a search engine, this may also be of interest to you:

    Hard link - Wikipedia, the free encyclopedia
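    For regular files that is exactly right: `cp -rl` mirrors a tree by hard-linking its files instead of copying them (GNU cp assumed). A quick sketch:

```shell
cd "$(mktemp -d)"
mkdir src
echo data > src/a

# -r recurses; -l creates hard links to regular files instead of copies.
cp -rl src mirror

# src/a and mirror/a are now two names for one inode:
[ src/a -ef mirror/a ] && echo "same inode"
```

    The directories themselves are still created fresh, though, which is the limitation the thread keeps running into: Linux filesystems do not allow hard links to directories.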

  11. #10
    Just Joined! | Join Date: Sep 2010 | Posts: 13


    Kloschüssel, I appreciate your helpful comments (low performance with many files in the same directory; the difficulty of finding symlinks that point to the same place), but I feel you mix your help with too much disparagement ("stupid idea", "it didn't occur to you to use a search engine").

    As I have written previously, my regular files are already hard linked; my difficulty is that the Linux file systems I know do not support hard-linked directories.
