What happens if you specify two paths in a volume in indexes.conf? For example:
[volume:example]
path = /opt/splunk/examplevolume
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 400000
Does Splunk send the data to each location, split it across the two paths, or choose one?
Cheers
↧
How is my data still searchable? Is there something wrong with indexes.conf?
Hi everyone,
I have set indexes.conf like this:
[qt]
coldToFrozenDir = /SplunkBack/splunk/qt
frozenTimePeriodInSecs = 20736000
20736000 = 240 days
But I can still search last year's data.
Splunk Enterprise = 6.6.3
Thanks
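To double-check, here is a search I think should show how old the oldest remaining bucket is for this index (an unverified sketch; dbinspect is the built-in command, the rest is my own math):
| dbinspect index=qt
| eval newest_event_age_days = (now() - endEpoch) / 86400
| stats max(newest_event_age_days) AS oldest_bucket_age_days
My understanding is that a bucket is only frozen once its newest event is older than frozenTimePeriodInSecs, so events older than 240 days can stay searchable while they share a bucket with newer events.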
↧
Is my search reliable to check which index size should be increased for data retention of at least 6 months?
Hello guys,
I built this query; do you think it's reliable for checking which indexes need their home/cold path sizes increased?
| tstats latest(_time) as latest,earliest(_time) as earliest WHERE index=* by index host source | eval lasttime=strftime(latest, "%Y-%m-%d") | eval firstevent=strftime(earliest, "%Y-%m-%d")
| eval stoday=strftime(now(),"%Y-%m-%d") | eval months_ago=(now()-15552000) | eval diff=months_ago-earliest | eval resultat=if(match(diff,"-"),"- 6 mois","+ 6 mois") | sort index,host,source,firstevent | fields - latest lasttime stoday months_ago earliest diff
Thanks.
↧
Indexer issue: problem parsing indexes.conf
I'm getting this error on the indexer:
Problem parsing indexes.conf: Cannot load IndexConfig: stanza=clustering Required parameter=homePath not configured
Where is homePath defined on the indexers?
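For reference, my understanding is that every index stanza in indexes.conf needs at least homePath, coldPath, and thawedPath defined, something like this (the paths below are placeholders built from the stanza name in the error; I'm also not sure a [clustering] stanza even belongs in indexes.conf):
[clustering]
homePath = $SPLUNK_DB/clustering/db
coldPath = $SPLUNK_DB/clustering/colddb
thawedPath = $SPLUNK_DB/clustering/thaweddb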
↧
Can I create indexes.conf and inputs.conf files on my search heads to send /var/log/ logs to my indexer cluster?
My SHC of 3 members runs Linux. I need to create an inputs.conf to ingest /var/log/* and send the data to my indexer cluster. _internal data from all of my servers is being indexed properly, so I believe the data flow is correct. I believe I need to do two things: 1) create an indexes.conf file on each search head and 2) create an inputs.conf file on each search head.
Step 1) On my deployer, I created /opt/splunk/etc/master-apps/_cluster/local/indexes.conf and executed splunk apply shcluster-bundle without errors. These are the contents of indexes.conf:
[linux]
coldPath = $SPLUNK_DB/linux/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/linux/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/linux/thaweddb
I cannot find the indexes.conf file on any of my search heads.
2) I also created /shcluster/apps/locallinux/local/inputs.conf and executed splunk apply shcluster-bundle without errors. These are the contents of inputs.conf:
[monitor:///var/log/messages]
disabled = false
index = linux
sourcetype = syslog
[monitor:///var/log/cron]
disabled = false
index = linux
sourcetype = syslog
Same problem as above, I cannot find the inputs.conf file on any of my search heads.
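For what it's worth, this is the layout and command I believe the deployer expects for SHC apps (a sketch; the app name sh_linux_inputs and the captain URI are placeholders I made up):
# On the deployer, under $SPLUNK_HOME/etc/shcluster/apps/
#   sh_linux_inputs/local/inputs.conf    (the monitor stanzas above)
#   sh_linux_inputs/local/indexes.conf   (if the search heads need it at all)
splunk apply shcluster-bundle -target https://<captain>:8089 -auth <user>:<password>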
As a separate but bigger-picture part of what I am trying to accomplish: on my license server and on my monitoring server, I created a linux index, used the web GUI to create the inputs, AND I have SPLUNK_HOME/etc/system/local/outputs.conf as below.
[indexAndForward]
index = false
[tcpout]
defaultGroup = DSCA_Indexers
forwardedindex.filter.disable = true
indexAndForward = false
[tcpout:DSCA_Indexers]
server=10.20.38.11:9997, 10.20.38.12:9997, 10.20.38.13:9997
My linux information gets to the indexers.
The desired goal is to send ALL Enterprise Server Linux /var/log/* to the indexers.
↧
How can we view the data retention policy we have set?
Hi All,
We have set the data retention to 1 year (365 days) on the cluster master, but when we search an index in the Search & Reporting app we can still fetch data older than a year. For audit purposes we need to verify the exact retention, and beyond that period there should be no logs, yet in our case we can still fetch data older than a year.
So, is there a search query that can pull the exact retention that has been set for each index, so we can confirm there is no data beyond it for that particular index?
These are the configurations we have set on the cluster master under the following folder:
/opt/splunk/etc/master-apps/mc_master_indexes/local
[splunk@mon-prod-cm-1 local]$ cat indexes.conf
[default]
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 20971520
[volume:hot]
path=/data/hot
maxVolumeDataSizeMB=2831156
[volume:cold]
path=/data/cold
maxVolumeDataSizeMB=12268340
We need your quick help getting the exact retention that has been set for all indexes.
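Here is the kind of search I had in mind to read back the configured retention per index (a sketch; the field list is just what looked useful to us):
| rest /services/data/indexes
| eval retention_days = frozenTimePeriodInSecs / 86400
| table title frozenTimePeriodInSecs retention_days maxTotalDataSizeMB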
↧
Edit hot/warm/cold data retentions
Hello
I want to add the configuration below to a specific indexer.
Hot/Warm/Cold Data retention 6 months 1.75TB
Frozen Data retention 6 months
The configuration is:
[myindex]
coldPath = $path\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $path\db
maxTotalDataSizeMB = 1835008
thawedPath = $path\thaweddb
maxDataSize = 1835008
frozenTimePeriodInSecs = 15780000
But when I try to add a new index, I get an error like the one below:
The following issues were found with submitted configuration: stanza=myindex parameter=maxDataSize Value supplied='1835008' is illegal; default='750'
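If I have understood the docs correctly, maxDataSize is the size of an individual bucket (and accepts auto or auto_high_volume), while maxTotalDataSizeMB caps the whole index, so maybe what I meant is something like this (a sketch, not yet tested on my system):
[myindex]
coldPath = $path\colddb
homePath = $path\db
thawedPath = $path\thaweddb
maxTotalDataSizeMB = 1835008
frozenTimePeriodInSecs = 15780000
maxDataSize = auto_high_volume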
↧
Which indexes.conf should I edit to set retirement policy?
Hi,
I'm trying to delete old data due to a space issue, and I found this: http://docs.splunk.com/Documentation/Splunk/6.2.1/Indexer/Setaretirementandarchivingpolicy.
But then I found that I have 4 indexes.conf files on my Linux box. Which one should I edit?
![alt text][1]
[1]: /storage/temp/217903-capture.jpg
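Side note: I came across btool, which I believe merges every copy of indexes.conf and shows which file each setting comes from; is that the right way to check? (sketch, run on the Splunk host):
$SPLUNK_HOME/bin/splunk btool indexes list --debug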
↧
Minimum free disk space (5000MB) issue solved, but storage usage did not go down
Hi,
I just set my retirement policy due to a space issue (reference: https://answers.splunk.com/answers/583891/which-indexesconf-should-i-edit-to-set-retirement.html).
My VM's used storage is the same before and after I set the retirement policy. Does setting a retirement policy to delete anything older than 1 month actually help reduce the used storage space?
What files can I delete to reduce storage so that I can shrink my provisioned storage?
![alt text][1]
[1]: /storage/temp/217905-capture.jpg
↧
What index should sysmon data go into, and how/where do I change the index?
I have successfully installed sysmon and verified the schemaversion matches the schemaversion in the config file (sysmonconfig-export.xml by SwiftonSecurity). I have confirmed that sysmon is running in event viewer (Application and Service Logs > Microsoft > Windows > Sysmon > Operational).
I downloaded and installed the TA-microsoft-sysmon on the search head I use.
I also copied the TA-sysmon folder to the deployment server (\Splunk\etc\deployment-apps\TA-microsoft-sysmon) and then deployed it to my UF running on my test host.
I ran my handy query
|tstats values(sourcetype) WHERE index=* by index
and noticed the data was rolling into the default main index...
How do I change the index to winsysmon? Or does anyone have a better idea of which index the sysmon data should go in?
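Here is what I think the override would look like in a local inputs.conf inside the deployed TA (an unverified sketch; it assumes the winsysmon index already exists on the indexers and that the TA reads the standard Sysmon event channel):
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index = winsysmon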
Thank you
↧
Delay in Splunk purging old events
My Splunk is a single Splunk 6.5.x instance which needs to retain the last 30 days of events, so I configured frozenTimePeriodInSecs = 2592000 in indexes.conf. But it does not always behave as expected.
From what I can tell, my indexes keep growing, and a search with latest=-30d sometimes still returns events. When the index size reaches the maximum size configured at index creation, or when I restart the Splunk instance, the index size drops to roughly half the maximum.
Does anyone know why there is such a significant delay before Splunk purges old events, and how to fix it?
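My working theory is that a bucket is only frozen once its newest event passes the 30-day mark, so a bucket spanning many weeks keeps its older events searchable. If that is right, maybe something like this would keep buckets to narrower time spans so they age out closer to the boundary (a sketch; the stanza name is a placeholder and the value is untested):
[myindex]
frozenTimePeriodInSecs = 2592000
maxHotSpanSecs = 86400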
↧
What are all those brackets inside indexes.conf?
Hi,
Is there documentation that explains what [_internal], [introspection], [_splunklogger], etc. are? I'm trying to understand what frozenTimePeriodInSecs affects. Right now I just change frozenTimePeriodInSecs under every set of square brackets to set my retirement policy. Why are there so many square-bracket sections in there?
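My current understanding (please correct me if this is wrong) is that each bracketed name is a stanza defining one index, and [default] supplies values to every index that does not override them, e.g.:
[default]
frozenTimePeriodInSecs = 2592000
[_internal]
# Splunk's own internal logs; inherits the [default] value unless overridden here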
↧
How to update indexes.conf files on unclustered production indexers?
I have to define some new indexes on production indexers (in the indexes.conf).
I have 4 indexers running.
Someone else set up an app, send_data_to_indexers (a basic outputs.conf), as follows:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = splunkindexer1.mycorp.com:9997, splunkindexer2.mycorp.com:9997, splunkindexer3.mycorp.com:9997, splunkindexer4.mycorp.com:9997
[tcpout-server://splunkindexer1.mycorp.com:9997]
My question is: If this outputs.conf is being used for all data being sent to the indexers, then can I edit the indexes.conf on each indexer and then restart one at a time?
Or is there a better way to do this?
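For reference, the kind of stanza I plan to add to indexes.conf on each indexer looks like this (the index name and paths are placeholders):
[new_index]
homePath = $SPLUNK_DB/new_index/db
coldPath = $SPLUNK_DB/new_index/colddb
thawedPath = $SPLUNK_DB/new_index/thaweddb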
Thank you
↧
Error applying indexes.conf when adding maxWarmDBcount
So whilst modifying my index cluster configuration to be a little smarter with what data is maintained between hot/warm/cold/frozen, I'm a little stuck with maxWarmDBcount.
So if I try to push the following in my _cluster/local/indexes.conf:
[test2]
maxWarmDBcount = 100
homePath = $SPLUNK_DB/test/db
coldPath = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
repFactor = auto
maxDataSize = 500
it fails with the following
Invalid key in stanza [test2] in /opt/splunk/etc/master-apps/_cluster/local/indexes.conf, line 298: maxWarmDBcount (value: 100).
If I comment out the maxWarmDBcount line, it works fine.
I have tried this on multiple indexes with varying rules, to no avail. Any advice would be welcome!
↧
How can I change the index for data from the Splunk Add-on for Unix and Linux?
Hi,
Data generated from the app is being forwarded with index = "os", but that index doesn't exist on our corporate Splunk instance. I edited indexes.conf, replacing "os" with the name of my index, and when that didn't work, I edited inputs.conf and added index = "myIndex", but the data still shows up with index = "os".
Here's what the files look like now.
$ cat ~/etc/apps/Splunk_TA_nix/default/indexes.conf
# Copyright (C) 2005-2016 Splunk Inc. All Rights Reserved.
[myIndex]
homePath = $SPLUNK_DB/os/db
coldPath = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
[firedalerts]
coldPath = $SPLUNK_DB/firedalerts/colddb
homePath = $SPLUNK_DB/firedalerts/db
thawedPath = $SPLUNK_DB/firedalerts/thaweddb
$ cat ~/etc/apps/Splunk_TA_nix/local/inputs.conf
[script://./bin/cpu.sh]
disabled = 0
[script://./bin/df.sh]
disabled = 0
[script://./bin/uptime.sh]
disabled = 0
[script://./bin/vmstat.sh]
disabled = 0
index = myIndex
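Here is what I suspect the local/inputs.conf needs to look like if index has to be set in every stanza rather than only the last one (an unverified sketch; the remaining script stanzas would follow the same pattern):
[script://./bin/cpu.sh]
disabled = 0
index = myIndex
[script://./bin/df.sh]
disabled = 0
index = myIndex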
↧
What are the best ways and practices to manage indexes?
Hi everyone!
I would like to know the best practices for managing index sizes.
I read in this post ( https://www.splunk.com/blog/2011/01/03/managing-index-sizes-in-splunk.html ) that we must control the size using maxWarmDBCount and maxTotalDataSize, which are indexes.conf parameters.
But I know it is possible to manage this with two other parameters, homePath.maxDataSizeMB and coldPath.maxDataSizeMB, which appears to be easier than the first configuration.
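For example, this is the kind of stanza I mean for the second approach (the index name and sizes are placeholders):
[example_index]
homePath.maxDataSizeMB = 100000
coldPath.maxDataSizeMB = 400000
maxTotalDataSizeMB = 500000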
What is the best way to do it?
↧
Issue with hot DB volume space after 1 TB
Hi team, I have configured the settings below across all the indexers (in a cluster), yet once the hot DB mount space reaches 1 TB, the indexers stop indexing and the search heads show errors.
**Settings in indexes.conf**
[volume:hot]
path = /opt/splunk_data_volume/hot
# Total size is 2.9 TB
# ~95% of 2.9 TB is 2.7 TB
maxVolumeDataSizeMB = 2700000
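For context, my understanding is that the volume cap only applies to indexes whose paths actually reference the volume, like this (the index name is a placeholder):
[example_index]
homePath = volume:hot/example_index/db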
Is there anything I'm missing? Please advise!
↧
Help in Configuring Indexes.conf
Hello Guys,
I am trying to configure indexes.conf. Here is the scenario: I need to keep data in hot buckets for 6 months, in warm and cold for another 6 months, and after one year the data must roll to frozen. I have defined the following settings. Does this follow Splunk best practices? Below is my index setting:
[volume:A]
maxVolumeDataSizeMB = 1000000
[test_index]
homePath = volume:A/test_index/db
coldPath = volume:A/test_index/colddb
thawedPath = $SPLUNK_DB/test_index/thaweddb
maxHotBuckets = 10
maxDataSize = 15000
maxHotSpanSecs = 15760000
coldToFrozenDir = /path/
maxTotalDataSizeMB = 1000000
frozenTimePeriodInSecs = 31104000
↧
Configuring Cold To Frozen path if cold is on a C: drive and I want my frozen path to be on a newly created F: drive
I created a new F: drive for my archiving, or frozen, path. Currently everything is configured to the default and is filling up my C: drive. How do I configure my indexes.conf so that my cold-to-frozen path is on the F: drive?
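Here is what I think the setting would look like for one index (an unverified sketch; the index name and folder are placeholders):
[my_index]
coldToFrozenDir = F:\splunk_frozen\my_index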
↧
2 clusters vs clustered and unclustered vs etc/system/local
We are running a large multi-site clustered indexer environment which is maturing, causing us to make some changes to our hot/warm/cold rollover scheme. The one issue we have is that 2 small sites have a different hardware setup from the rest of the environment. Because of this, I can't use the same indexes.conf on these 2 smaller sites that I use on the rest of the indexers.
The question then is what is the best approach to handling this situation? As I see it, I have 3 choices:
1. Run 2 clusters, which would force me to add another cluster master.
2. Run the 2 smaller sites unclustered. My gut tells me this would be undesirable, but I'd like something a little more concrete than my gut.
3. Put an indexes.conf in etc/system/local for the smaller sites to override the indexes.conf we have in our slave-apps dir for the clustered indexers.
I believe option 3 to be the best but wanted to reach out for some verification and potential alternative suggestions.
↧